Introduction

Snow Owl® is a highly scalable terminology server with revision-control capabilities and collaborative authoring platform features. It allows you to store, search, and author high volumes of terminology artifacts quickly and efficiently.

Example use cases

  • You work in the healthcare industry and are interested in using a terminology server for browsing, accessing, and distributing components of various terminologies and classifications to third-party consumers. In this case, you can use Snow Owl to load the necessary terminologies and access them via FHIR and proprietary APIs.

  • You are responsible for maintaining and publishing new versions of a particular terminology. In this case, you can use Snow Owl to collaboratively access and author the terminology content, and at the end of your release schedule publish it with confidence and zero errors.

  • You have an Electronic Health Record system and would like to capture, maintain, and query clinical information in a structured and standardized manner. Your Snow Owl terminology server can integrate with your EHR server via standard APIs to provide the necessary access for both terminology binding and data processing and analytics.

Generic core functionality and SNOMED CT tooling are open-source; all other features require a license.

Features

Revision-controlled authoring and distribution

  • Maintains multiple versions (including unpublished and published) for each terminology artifact and provides APIs to access them all

  • Independent work branches offer work-in-progress isolation, external business workflow integration, and team collaboration

SNOMED CT and others

  • SNOMED CT terminology support

    • RF2 Release File Specification as of 2024-05-01

    • Support for Relationships with concrete values

    • Official and Custom Reference Sets

    • Expression Constraint Language v2.1.0 (specification and implementation)

    • Compositional Grammar 2.3.1 (specification and implementation)

    • Expression Template Language 1.0.0 (specification and implementation)

  • With its modular design, the server can maintain multiple terminologies (including local codes, mapping sets, and value sets)

Various sets of APIs

  • Dedicated SNOMED CT, ATC, ICD-10, LOINC, Local Code System, Value Set, and Concept Map APIs

  • FHIR API R5 (R4B and R4 are also supported for certain resource types)

  • CIS API 1.0 (reference implementation)

Highly extensible and configurable

  • A simple-to-use plug-in system makes it easy to develop and add new terminology tooling/API or any other functionality

Full-text search and data storage

  • Built on top of Elasticsearch (a highly scalable, distributed, open-source search engine)

    • Connect to your existing cluster or use the embedded instance (supports up to Elasticsearch 8.x)

    • All the power of Elasticsearch is available (monitoring, analytics, and many more)

Acknowledgments

Numerous other organizations have directly or indirectly contributed to Snow Owl, including:

  • Singapore Ministry of Health

  • American Dental Association

  • University of Nebraska Medical Center (USA)

  • Federal Public Service of Public Health (Belgium)

  • Danish Health Data Authority

  • Health and Welfare Information Systems Centre (Estonia)

  • Department of Health (Ireland)

  • New Zealand Ministry of Health

  • Norwegian Directorate of eHealth

  • Integrated Health Information Systems (Singapore)

  • National Board of Health and Welfare (Sweden)

  • eHealth Suisse (Switzerland)

  • National Library of Medicine (USA)

If you’d like to see Snow Owl in action, the Snowray Terminology Service™ provides a managed terminology server and high-quality terminology content management from your web browser.


In March 2015, SNOMED International generously licensed the Snow Owl Terminology Server components supporting SNOMED CT. They subsequently made the licensed code available to their members and the global community under an open-source license.

In March 2017, NHS Digital licensed the Snow Owl Terminology Server to support the mandatory adoption of SNOMED CT throughout all care settings in the United Kingdom by April 2020. In addition to driving the UK’s clinical terminology efforts by providing a platform to author national clinical codes, Snow Owl will support the maintenance and improvement of the dm+d drug extension, which alone is used in over 156 million electronic prescriptions per month. Improvements to the terminology server under this agreement were made available to the global community.


Next steps

Now that we have our instance up and running, the next step is to understand how to communicate with it. Fortunately, Snow Owl provides very comprehensive and powerful APIs to interact with your instance.

REST API

Here are a few of the things that can be done with the API:

  • Perform CRUD (Create, Read, Update, and Delete) and search operations against your terminology resources

  • Execute advanced search operations such as paging, sorting, filtering, scripting, aggregations, and many others

  • Administer your instance data

  • Check your instance health, status, and statistics
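
For example, with the docker-based deployment described in the Quick Start and its default test user, two of these operations look like this (both endpoints appear elsewhere in this guide):

curl -u "test:test" 'http://localhost:8080/snowowl/codesystems?pretty' # search terminology resources
curl 'http://localhost:8080/snowowl/info?pretty'                       # check instance health and status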

Conclusion

Snow Owl is both a simple and complex product. We’ve so far learned the basics of what it is, how to look inside of it, and how to work with it using some of the available APIs. Hopefully, this tutorial has given you a better understanding of what Snow Owl is and more importantly, inspired you to further experiment with the rest of its great features!

Find concepts by ID or term

Search by identifier

Now that SNOMED CT content is present in the code system (identified by the unique id SNOMEDCT), it is time to take a deeper dive. A frequent interaction is to retrieve the properties of a concept identified by its SNOMED CT Identifier. To do so, execute the following command:

curl -u "test:test" 'http://localhost:8080/snowowl/snomedct/SNOMEDCT/concepts/138875005?expand=pt()&pretty'

The response should look something like this:

{
  "id": "138875005",
  "released": true,
  "active": true,
  "effectiveTime": "20020131",
  "moduleId": "900000000000207008",
  "iconId": "snomed_rt_ctv3",
  "score": 0.0,
  "memberOf": [ "900000000000497000" ],
  "activeMemberOf": [ "900000000000497000" ],
  "definitionStatus": {
    "id": "900000000000074008"
  },
  "subclassDefinitionStatus": "NON_DISJOINT_SUBCLASSES",
  "pt": {
    "id": "220309016",
    "term": "SNOMED CT Concept",
    "concept": {
      "id": "138875005"
    },
    "type": {
      "id": "900000000000013009"
    },
    "typeId": "900000000000013009",
    "conceptId": "138875005",
    "acceptability": {
      "900000000000509007": "PREFERRED",
      "900000000000508004": "PREFERRED"
    }
  },
  "ancestorIds": [ ],
  "parentIds": [ "-1" ],
  "statedAncestorIds": [ ],
  "statedParentIds": [ "-1" ],
  "definitionStatusId": "900000000000074008"
}

We used the expand query parameter to include the concept's Preferred Term (PT) in the response. The concept in question is the root concept of the SNOMED CT hierarchy.

Search by term

Snow Owl also allows users to retrieve concepts matching a specific search term or phrase. See what happens if we try to find the concepts associated with the condition "Méniere's disease":

curl -u "test:test" 'http://localhost:8080/snowowl/snomedct/SNOMEDCT/concepts?term=M%C3%A9niere%27s%20disease&expand=pt()&pretty'

This time more than one concept can be present in the response, so we receive a collection of items. Results are sorted by relevance, indicated by the field score:

{
  "items": [ {
    "id": "13445001",
    "released": true,
    "active": true,
    "effectiveTime": "20020131",
    "moduleId": "900000000000207008",
    "iconId": "disorder",
    "score": 3.9305625,
    "memberOf": [ "447562003", "733073007", "900000000000497000" ],
    "activeMemberOf": [ "447562003", "733073007", "900000000000497000" ],
    "definitionStatus": {
      "id": "900000000000074008"
    },
    "subclassDefinitionStatus": "NON_DISJOINT_SUBCLASSES",
    "pt": {
      "id": "178783019",
      "term" : "Ménière's disease",
      "concept": {
        "id": "13445001"
      },
      "type": {
        "id": "900000000000013009"
      },
      "typeId": "900000000000013009",
      "conceptId": "13445001",
      "acceptability": {
        "900000000000509007": "PREFERRED",
        "900000000000508004": "PREFERRED"
      }
    },
    "ancestorIds": [ "-1", "20425006", ..., "1279550006" ],
    "parentIds": [ "50438001" ],
    "statedAncestorIds": [ "-1", "64572001", "138875005", "404684003" ],
    "statedParentIds": [ "50438001" ],
    "definitionStatusId": "900000000000074008"
  }, {
    ...
  } ],
  "searchAfter": "AoIFQAd5LmITGo8wMTA4OTA5MTAwMDExOTEwNQ==",
  "limit": 50,
  "total": 27
}

The total number of matching concepts is shown in the property named total.
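
Responses to collection requests also carry a searchAfter cursor (shown above). As a sketch of keyset paging, assuming the server accepts the cursor back under a query parameter of the same name (the parameter name is an assumption here; the cursor value comes from the previous response), the next page could be requested like this:

curl -u "test:test" 'http://localhost:8080/snowowl/snomedct/SNOMEDCT/concepts?term=M%C3%A9niere%27s%20disease&limit=50&searchAfter=AoIFQAd5LmITGo8wMTA4OTA5MTAwMDExOTEwNQ%3D%3D&pretty'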


Quick Start

This guide shows you how to quickly set up a Snow Owl deployment using docker to store, search, and edit any healthcare terminology data. Start here if you are interested in evaluating its core features.

TL;DR

Here is how to deploy Snow Owl in less than a minute:

git clone https://github.com/b2ihealthcare/snow-owl.git
cd ./snow-owl/docker/compose
docker compose up -d

Prerequisites

To initiate a Snow Owl deployment, the only requirements are:

  • a terminal

  • the Docker Engine (see the install guide)

  • git (optional, see the install guide)

Supported architectures

Setting up a fully operational Snow Owl server depends on which architectures the Docker Engine supports. Thankfully, it covers a wide variety of platforms, such as Linux, Windows, and Mac.

Start your deployment

There is a preassembled docker compose configuration in Snow Owl's GitHub repository. This set of files can be used to start up a Snow Owl terminology server and its corresponding data layer, an Elasticsearch instance.

To get ahold of the necessary files, either download the repository content (see instructions here) or clone the git repository using the git command line tool:

git clone https://github.com/b2ihealthcare/snow-owl.git

Once the clone is finished find the directory containing the compose example:

cd ./snow-owl/docker/compose

While in this directory start the services using docker compose:

docker compose up -d

The service snowowl listens on localhost:8080 while it talks to the elasticsearch service over an isolated Docker network.

To stop the application, type docker compose down. Data volumes/mounts will remain on disk, so it's possible to start the application again with the same data using docker compose up.

A default user is configured to experiment with features that would require authentication and authorization. The username is test and its password is the same.

Here is the full content of the compose file:

docker-compose.yml

services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.11.1
    container_name: elasticsearch
    environment:
      - "ES_JAVA_OPTS=-Xms6g -Xmx6g"
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    volumes:
      - es-data:/usr/share/elasticsearch/data
      - ./config/elasticsearch/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      - ./config/elasticsearch/synonym.txt:/usr/share/elasticsearch/config/analysis/synonym.txt
    healthcheck:
      test: curl --fail http://localhost:9200/_cluster/health?wait_for_status=green || exit 1
      interval: 1s
      timeout: 1s
      retries: 60
    ports:
      - "127.0.0.1:9200:9200"
    restart: unless-stopped
  snowowl:
    image: b2ihealthcare/snow-owl-oss:latest
    container_name: snowowl
    environment:
      - "SO_JAVA_OPTS=-Xms6g -Xmx6g"
      - "ELASTICSEARCH_URL=http://elasticsearch:9200"
    depends_on:
      elasticsearch:
        condition: service_healthy
    volumes:
      - ./config/snowowl/snowowl.yml:/etc/snowowl/snowowl.yml
      - ./config/snowowl/users:/etc/snowowl/users # default username and password: test - test
      - es-data:/var/lib/snowowl/resources/indexes
    ports:
      - "8080:8080"
    restart: unless-stopped

volumes:
  es-data:
    driver: local

The example configuration will allocate 6 GB of memory for Elasticsearch and another 6 GB for Snow Owl. These settings are required if all features of Snow Owl are to be tested. To change these values, see the instructions below.

Change memory settings

Reducing the memory settings of the docker stack is feasible when Snow Owl is evaluated with limited terminologies and basic operations such as term searches. The minimum value should be no less than 2 GB for each service.

The memory settings of Elasticsearch can be changed by adapting the following line in the docker-compose.yml file to e.g.:

- "ES_JAVA_OPTS=-Xms2g -Xmx2g"

The memory settings of Snow Owl can be changed by adapting the following line in the docker-compose.yml file to e.g.:

- "SO_JAVA_OPTS=-Xms2g -Xmx2g"

Check the healthiness of the service

Snow Owl's status is exposed via a health REST API endpoint. To see if everything went well, run the following command:

curl http://localhost:8080/snowowl/info?pretty

The expected response is

{
  "version": "<version number>",
  "description": "You Know, for Terminologies",
  "repositories": {
    "items": [ {
      "id": "snomed",
      "health": "GREEN",
      "diagnosis": "",
      "indices" : [ {
        "index": "snomed-relationship",
        "status": "GREEN"
      }, {
        "index": "snomed-commit",
        "status": "GREEN"
      }, ...
    } ]
  }
}

The response contains the installed version along with a list of repositories, their overall health (eg. "snomed" with health "GREEN"), and their associated indices and status (eg. "snomed-relationship" with status "GREEN").
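
If the jq tool is available on the host (an extra utility, not part of this stack), a quick scripted health check might look like this sketch:

curl -s http://localhost:8080/snowowl/info | jq -r '.repositories.items[] | "\(.id): \(.health)"'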

The open-source version of Snow Owl can be downloaded as a docker image. The list of all published tags and additional details about the image can be found in Snow Owl's public Docker Hub repository.


Find concepts using ECL

One of Snow Owl's powerful features is the ability to list concepts matching a user-specified query expression using SNOMED International's Expression Constraint Language (ECL) syntax. If you would like to know more about the language itself, visit the documentation on the official site.

In this example, we list the direct descendants of the root concept using the ECL expression <!138875005 (via the ecl query parameter), and also limit the result set to a single item using the limit query parameter:

curl -u "test:test" 'http://localhost:8080/snowowl/snomedct/SNOMEDCT/concepts?ecl=%3C!138875005&limit=1&pretty'

As no query parameter in this request would make Snow Owl differentiate between "better" and "worse" results (eg. a search term to match), concepts in the response will be sorted by identifier.

The item returned is, indeed, one of the top-level concepts in SNOMED CT: 105590001 |Substance|

{
  "items": [ {
    "id": "105590001",
    "released": true,
    "active": true,
    "effectiveTime": "20020131",
    "moduleId": "900000000000207008",
    "iconId": "substance",
    "score": 0.0,
    "memberOf": [ "723560006", "733073007", "900000000000497000" ],
    "activeMemberOf": [ "723560006", "733073007", "900000000000497000" ],
    "definitionStatus": {
      "id": "900000000000074008"
    },
    "subclassDefinitionStatus": "NON_DISJOINT_SUBCLASSES",
    "ancestorIds": [ "-1" ],
    "parentIds": [ "138875005" ],
    "statedAncestorIds": [ "-1" ],
    "statedParentIds": [ "138875005" ],
    "definitionStatusId": "900000000000074008"
  } ],
  "searchAfter": "AoEpMTA1NTkwMDAx",
  "limit": 1,
  "total": 19
}

The number 19 in property total suggests that additional matches exist that were not included in the response this time.
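
As another sketch using the same endpoint, the standard ECL operator << ("descendant-or-self of") matches a concept together with all of its descendants, so <<105590001 would return the Substance concept itself plus everything below it. URL-encoded, the request looks like this:

curl -u "test:test" 'http://localhost:8080/snowowl/snomedct/SNOMEDCT/concepts?ecl=%3C%3C105590001&limit=1&pretty'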

Software requirements

Operating System

The Terminology Server is recommended to be installed on x86_64 / amd64 Linux operating systems where Docker Engine is available. See the list of supported architectures by Docker.

Here is the list of distributions that we suggest in the order of recommendation:

  • Ubuntu LTS releases

  • Debian LTS releases

  • CentOS 7 (deprecated)

It is possible to install the server release package on other distributions but bear in mind that there might be limitations.

Software packages

Before starting the production deployment of the Terminology Server make sure that the following packages are installed and configured properly:

  • Docker Engine

  • ability to execute bash scripts

Firewall

In case a reverse proxy is used, the Terminology Server requires two ports to be opened either towards the intranet or the internet (depending on usage):

  • http: port 80

  • https: port 443

In case there is no reverse proxy installed, the following port must be opened to be able to access the server's REST API:

  • http: port 8080
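
How the ports are opened depends on the firewall in use. As a sketch, assuming ufw manages the host firewall, the two scenarios would look like this:

sudo ufw allow 80/tcp   # http, with a reverse proxy in front
sudo ufw allow 443/tcp  # https, with a reverse proxy in front
sudo ufw allow 8080/tcp # direct access to the REST API, no reverse proxy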

Preload dataset (optional)

In certain cases, a pre-built dataset is also shipped together with the Terminology Server. This is to ease the initial setup procedure and get going fast.

This method is only applicable to deployments where the Elasticsearch cluster is co-located with the Terminology Server.

To load data into a managed Elasticsearch cluster, there are several options:

  • use cross-cluster replication

  • use snapshot-restore

  • use Snow Owl to rebuild the data to the remote cluster

These datasets are the compressed form of the Elasticsearch data folder and follow the same structure, except for having a top folder called indexes. This is the same folder as ./snow-owl/resources/indexes, so to load the dataset, simply extract the contents of the dataset archive to this path:

tar --extract \
    --gzip \
    --verbose \
    --same-owner \
    --preserve-permissions \
    --file=snow-owl-resources.tar.gz \
    --directory=/opt/snow-owl/resources/

Make sure to validate the file ownership of the indexes folder after decompression. Elasticsearch requires UID=1000 and GID=0 to be set for its data folder.

chown -R 1000:0 /opt/snow-owl/resources


Setup and Administration

This guide contains all the necessary details for installing the Snow Owl Terminology Server in a production environment. The following sections will guide you on how to:

  • Select the appropriate hardware and software environment to host the service

  • Download, install, and configure the entire technology stack necessary for operating the server

  • Handle release packages to upgrade to a newer version

  • Perform a data backup or a restore

  • Manage user access

  • Install Snow Owl using advanced methods

  • Apply advanced configuration options

Import SNOMED CT

Let's import an RF2 archive using its SNAPSHOT content (see release types here) so that we can further explore the available SNOMED CT APIs. To start the import process, send the following request:

curl -v -u "test:test" http://localhost:8080/snowowl/snomedct/SNOMEDCT/import?type=snapshot\&createVersions=false \
-F file=@SnomedCT_InternationalRF2_PRODUCTION.zip

Curl will display the entire interaction between it and the server, including many request and response headers. We are interested in these two (response) rows in particular:

< HTTP/1.1 201 Created
< Location: http://localhost:8080/snowowl/snomedct/SNOMEDCT/import/107f6efa69886bfdd73db5586dcf0e15f738efed

The first one indicates that the file was uploaded successfully and a resource has been created to track the progress of the import job, while the second row indicates the location of this resource.

Depending on the size and type of the RF2 package, hardware, and Snow Owl configuration, RF2 imports might take a few hours to complete (but usually less). Official SNAPSHOT distributions can be imported in less than 30 minutes by allocating 6 GB of heap size to Snow Owl and configuring it to use a solid-state disk for the data directory.

The process itself is asynchronous and its status can be checked by periodically sending a GET request to the location returned in the response header:

curl -u "test:test" http://localhost:8080/snowowl/snomedct/SNOMEDCT/import/107f6efa69886bfdd73db5586dcf0e15f738efed?pretty

The expected response while the import is running:

{
  "id": "107f6efa69886bfdd73db5586dcf0e15f738efed",
  "status": "RUNNING"
}

Upon completion, you should receive a different response that lists component identifiers visited during the import as well as any defects encountered in uploaded release files:

{
  "id": "107f6efa69886bfdd73db5586dcf0e15f738efed",
  "status": "FINISHED",
  "response": {
    "visitedComponents": [ ... ],
    "defects": [ ],
    "success": true
  }
}

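Instead of re-running the request by hand, a small shell loop can poll the status until the import leaves the RUNNING state (a sketch, assuming the jq tool is installed):

IMPORT_URL="http://localhost:8080/snowowl/snomedct/SNOMEDCT/import/107f6efa69886bfdd73db5586dcf0e15f738efed"
while [ "$(curl -s -u "test:test" "$IMPORT_URL" | jq -r .status)" = "RUNNING" ]; do
  sleep 30 # check every half minute
done
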
Create your first Resource

Snow Owl is now running but does not contain any content whatsoever. To be able to import or author terminology data a resource has to be created beforehand. There are three major resource types in the system:

  • Code Systems (e.g. SNOMED CT, ATC, LOINC, ICD-10)

  • Value Sets

  • Concept Maps

For the sake of this quick start guide, we will walk through creating a SNOMED CT code system, importing content, and querying concepts based on different criteria.

Create a Code System

If we take a look at eg. the list of known code systems, we get an empty result set:

curl -u "test:test" http://localhost:8080/snowowl/codesystems?pretty

{
  "items" : [ ],
  "limit" : 0,
  "total" : 0
}

To import SNOMED CT content, we have to create a code system first using the following request:

curl -X POST \
-u "test:test" \
-H "Content-type: application/json" \
http://localhost:8080/snowowl/codesystems \
-d '{
  "id": "SNOMEDCT",
  "url": "http://snomed.info/sct/900000000000207008",
  "title": "SNOMED CT International Edition",
  "description": "SNOMED CT International Edition",
  "status": "active",
  "copyright": "(C) 2023 International Health Terminology Standards Development Organisation 2002-2023. All rights reserved.",
  "contact": "https://snomed.org",
  "oid": "2.16.840.1.113883.6.96",
  "toolingId": "snomed",
  "settings": {
    "moduleIds": [
      "900000000000207008",
      "900000000000012004"
    ],
    "locales": [
      "en-x-900000000000508004",
      "en-x-900000000000509007"
    ],
    "languages": [
      {
        "languageTag": "en",
        "languageRefSetIds": [
          "900000000000509007",
          "900000000000508004"
        ]
      },
      {
        "languageTag": "en-us",
        "languageRefSetIds": [
          "900000000000509007"
        ]
      },
      {
        "languageTag": "en-gb",
        "languageRefSetIds": [
          "900000000000508004"
        ]
      }
    ],
    "publisher": "SNOMED International",
    "namespace": "373872000",
    "maintainerType": "SNOMED_INTERNATIONAL"
  }
}'

Use of SNOMED CT is subject to additional conditions not listed here, and the full copyright notice has been shortened for brevity in the request above. Please see here for details.

The request body includes:

  • The code system identifier (SNOMEDCT)

  • Various pieces of metadata offering a human-readable title, status, contact information, URL and OID for identification, etc.

  • The tooling identifier snomed that points to the repository that will store content

  • Additional code system settings stored as key-value pairs

If the request succeeds the server returns a "204 No Content" response. We can verify that the code system has been registered correctly with the following request:

curl -u "test:test" http://localhost:8080/snowowl/codesystems/SNOMEDCT?pretty

The expected response is:

{
  "id": "SNOMEDCT",
  "url": "http://snomed.info/sct/900000000000207008",
  "title": "SNOMED CT International Edition",
  "language": "en",
  ...
  "branchPath": "MAIN/SNOMEDCT",
  ...
}

In addition to the submitted values, you will find that additional administrative properties also appear in the output. One example is branchPath which specifies the working branch of the code system within the repository.

The code system now exists but is empty. To verify this claim, we can list concepts using either Snow Owl's native API tailored for SNOMED CT or the standardized FHIR API for a representation that is uniform across different kinds of code systems – for the sake of simplicity, we will use the former in this example.

The following request can be used to list all available concepts in a SNOMED CT code system:

curl -u "test:test" http://localhost:8080/snowowl/snomedct/SNOMEDCT/concepts?pretty

The expected response is:

{
  "items": [ ],
  "limit": 50,
  "total": 0
}

At this point, we can either import or create content in the SNOMED CT code system. Follow the instructions on the next page to import your SNOMED CT RF2 release into Snow Owl.



System settings

Some settings may require attention before moving to production. While the steps below may not necessitate any action, there are cases where the host running both Snow Owl and Elasticsearch will require fine-tuning.

Configure your host for Elasticsearch requirements

Some system-level settings need to be checked before deploying your own Elasticsearch in production.

Following this link, there is an extensive guide on what to verify, but it usually comes down to these items:

  • Set vm.max_map_count to at least 262144

  • Increase ulimits for nofile and nproc

  • Disable swapping

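On most Linux hosts these can be applied with standard sysctl tooling; as a sketch for the virtual memory setting (the file name under /etc/sysctl.d is arbitrary):

sudo sysctl -w vm.max_map_count=262144                                  # apply immediately
echo 'vm.max_map_count=262144' | sudo tee /etc/sysctl.d/99-snowowl.conf # persist across reboots
sudo sysctl --system
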
Configure your host for Snow Owl requirements

Set permissions for folders and files appropriately

By default, Snow Owl runs inside the container as user snowowl using uid:gid 1000:1000.

  • If you are bind-mounting a local directory or file:

    • ensure it is readable by the user mentioned above

    • ensure that settings, data and log directories are writable as well

A good strategy is to grant group access to gid 1000 or 0 for the local directory.
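
As a sketch, for a deployment extracted under /opt/snow-owl (the path is an example), granting group access to gid 1000 could look like this:

sudo chgrp -R 1000 /opt/snow-owl
sudo chmod -R g+rwX /opt/snow-owl # grant the group read/write access and directory traversal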

Bind-mount Snow Owl's temporary working folder

In case the file system of the docker service on the host is different from what the Snow Owl deployment uses, it could be worthwhile to bind-mount Snow Owl's temporary working folder to a path that has excellent I/O performance. E.g.:

  • the root file system / is backed by a block storage that purposefully has lower I/O performance, this is the file system used by the docker service.

  • the deployment folder /opt/snow-owl is backed by a fast local SSD

The definition of the snowowl service in the docker compose file should be amended like this:

snowowl:
    image: b2ihealthcare/snow-owl-<variant>:<version>
    ...
    volumes:
      - ./config/snowowl/snowowl.yml:/etc/snowowl/snowowl.yml
      - ./config/snowowl/users:/etc/snowowl/users
      - ${SNOWOWL_DATA_FOLDER}:/var/lib/snowowl
      - ${SNOWOWL_LOG_FOLDER}:/var/log/snowowl
+     - /path/to/folder/with/fast/performance:/usr/share/snowowl/work
    ports:
    ...

Pro tip: in case the Terminology Server is deployed to the cloud, make sure this path is served by a fast SSD disk (local or ephemeral SSD is the best). This will make import or export processes even faster.


Release package

Terminology Server releases are shared with customers through custom download URLs. The downloaded artifact is a Linux (tar.gz) archive that contains:

  • an initial folder structure

  • the configuration files for all services

  • a docker-compose.yml file that brings together the entire technology stack to run and manage the service

  • the credentials required to pull our proprietary docker images

As a best practice, it is advised to extract the content of the archive under /opt, so the deployment folder will be /opt/snow-owl. The docker-compose setup relies on this path; if required, it can be changed by editing the ./snow-owl/docker/.env file later on (see the DEPLOYMENT_FOLDER environment variable).
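
For example, if the archive was extracted to a different location, the variable would be adjusted in ./snow-owl/docker/.env accordingly (the path below is illustrative):

DEPLOYMENT_FOLDER=/srv/snow-owl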

When decompressing the archive it is important to use the --same-owner and --preserve-permissions options so the docker containers can access the files and folders appropriately.

The next page will describe the content of the release package in more detail.

tar --extract \
    --gzip \
    --verbose \
    --same-owner \
    --preserve-permissions \
    --file=/path/to/snow-owl-linux-x86_64.tar.gz \
    --directory=/opt/

Hardware requirements

Snow Owl with a co-located Elasticsearch cluster

For installations where Snow Owl and Elasticsearch are co-located, we recommend the following hardware specifications:

Snow Owl & co-located ES | Cloud | Dedicated
vCPU | 8 | 8
Memory | 32 GB | 32 GB
I/O performance | >= 5000 IOPS SSD | >= 5000 IOPS SSD
Disk space | 200 GB | 200 GB

Snow Owl with a managed Elasticsearch cluster

For installations where Snow Owl connects to a managed Elasticsearch cluster at elastic.co, we recommend the following hardware specifications:

Snow Owl | Cloud | Dedicated
vCPU | 8 (compute optimized) | 8
Memory | 16 GB | 16 GB
I/O performance | OS: balanced disk; TS file storage: local SSD | OS: HDD / SSD; TS file storage: SSD
Disk space | OS: 20 GB; TS file storage: 100 GB | OS: 20 GB; TS file storage: 100 GB

Elasticsearch @ elastic.co
vCPU | 8 (compute optimized)
Memory | 4 GB
I/O performance | handled by elastic.co
Disk space | 180 GB

In case Snow Owl is planned to be used with resource-intensive workloads (large code system upgrades, frequent classification of terminologies, bulk authoring) an 8 vCPU / 4 GB Elasticsearch cluster might not be sufficient. Consider increasing the size of the hosted Elasticsearch instance gradually, so that finding the sweet spot will be straightforward.

Cloud VMs

Here are a few examples of which Virtual Machine types could be used for hosting the Terminology Server at the three most popular Cloud providers (including but not limited to):

Cloud Provider | VM type
GCP | c2d-highcpu-8
AWS | c5d.2xlarge
Azure | F8s v2

Configure Elastic Cloud (optional)

The release package contains everything that is required to use a co-located Elasticsearch instance by default. Only proceed with these steps if a remote Elasticsearch cluster is required.

To configure the Terminology Server to work with a managed Elasticsearch cluster two settings require attention.

Configure Terminology Server

First, the local Elasticsearch container and all its configurations should be removed from the docker-compose.yml file. Once that is done, we have to tell the Terminology Server where to find the cluster. This can be set in the file ./snow-owl/docker/configs/snowowl/snowowl.yml:

repository:
  index:
    socketTimeout: 60000
    clusterUrl: https://my-es-cluster.elastic-cloud.com:9243
    clusterUsername: my-es-cluster-user
    clusterPassword: my-es-cluster-pwd

Configure Elastic Cloud

The Snow Owl Terminology Server leverages Elasticsearch's synonym filters. To have this feature work properly with a managed Elasticsearch cluster, our custom dictionary has to be uploaded and configured. The synonym file can be found in the release package under ./snow-owl/docker/configs/elasticsearch/synonym.txt. This file needs to be compressed as a zip archive following this structure:

.
└── analysis
    └── synonym.txt

For the managed Elasticsearch instance, this zip file needs to be configured as a bundle extension. The steps required are covered in this guide in great detail.

Once the bundle is configured and the cluster is up we can (re)start the docker stack. In case there are any troubles the Terminology Server will refuse to initialize and let you know what the problem is in its log files.

Get SSL certificate (optional)

Secure HTTP is definitely a must in case the Terminology Server is a public-facing instance. For such cases, we provide a pre-configured environment and a convenience script to acquire the necessary SSL certificate.

SSL certificate retrieval and renewal are managed by certbot, the official ACME client recommended by Let's Encrypt.

To be able to obtain an SSL certificate the following requirements must be met:

  • docker and docker compose are installed

  • the server instance has a public IP address

  • a DNS A record is configured for the desired domain name routing to the server's IP address

For the sake of example let's say the target domain name is snow-owl.b2ihealthcare.com .

Go to the sub-folder called ./snow-owl/docker/configs/cert. Make sure the init-certificate.sh script has permission to be executable and get some details about its parameters:

[root@host]# pwd
/opt/snow-owl/docker/configs/cert

[root@host]# chmod +x init-certificate.sh
[root@host]# ./init-certificate.sh -h
  DESCRIPTION:

     Get certificate for the specified domain name using Let's Encrypt and certbot

  OPTIONS:
     -h
        Show this help
     -d domain
        Define the domain name to get the certificate for
     -e email (optional)
        The email address to use for the certificate registration

  EXAMPLES:

     ./init-certificate.sh -d mywebsite.com -e example@mail.com
     ./init-certificate.sh -d example.com

As you can see -d is used for specifying the domain name, and -e is used for specifying a contact email address (optional). Now execute the script with our example parameters:

Script execution will overwrite the files ./snow-owl/docker/docker-compose.yml and ./snow-owl/docker/configs/nginx/nginx.conf. Make a note of any local changes beforehand if required.

./init-certificate.sh -d snow-owl.b2ihealthcare.com -e domain@b2ihealthcare.com

After successful execution, a new folder is created ./snow-owl/cert which contains all the certificate files required by NGINX. The docker-compose.yml file is also amended with a piece of code that guarantees automatic renewal of the certificate:

  nginx:
    image: nginx:stable
    container_name: nginx
    volumes:
      - ./configs/nginx/conf.d/:/etc/nginx/conf.d/
      - ./configs/nginx/nginx.conf:/etc/nginx/nginx.conf
      - ${CERT_FOLDER}/conf:/etc/letsencrypt
      - ${CERT_FOLDER}/www:/var/www/certbot
    depends_on:
      - snowowl
    ports:
      - "80:80"
      - "443:443"
    # Reload nginx config every 6 hours and restart
    command: "/bin/sh -c 'while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g \"daemon off;\"'"
    restart: unless-stopped
  certbot:
    image: certbot/certbot:latest
    container_name: certbot
    volumes:
      - ${CERT_FOLDER}/conf:/etc/letsencrypt
      - ${CERT_FOLDER}/www:/var/www/certbot
    # Check for SSL cert renewal every 12 hours
    entrypoint: "/bin/sh -c 'trap exit TERM; while :; do certbot renew; sleep 12h & wait $${!}; done;'"
    restart: unless-stopped

At this point everything is prepared for having secure HTTP, let's see what else needs to be configured before spinning up the service.

Folder structure

Here is the list of files and folders extracted from the release package; their roles are described below.

snow-owl/
├── backup
├── docker
│   ├── configs
│   │   ├── cert
│   │   │   ├── conf.d
│   │   │   ├── docker-compose-cert.yml
│   │   │   ├── docker-compose.yml
│   │   │   ├── init-certificate.sh
│   │   │   └── nginx.conf
│   │   ├── elasticsearch
│   │   │   ├── elasticsearch.yml
│   │   │   └── synonym.txt
│   │   ├── ldap-bootstrap
│   │   │   ├── 100_groups.ldif
│   │   │   └── 200_users.ldif
│   │   ├── nginx
│   │   │   ├── conf.d
│   │   │   │   └── snowowl.conf
│   │   │   └── nginx.conf
│   │   └── snowowl
│   │       ├── snowowl.yml
│   │       └── users
│   ├── docker-compose.yml
│   ├── docker_login.txt
│   └── .env
├── ldap
├── logs
└── resources
    ├── attachments 
    └── indexes

/docker

Contains every configuration file used for the docker stack, including docker-compose.yml.

In Docker, this directory serves as the context, implying that when executing commands, one needs to either explicitly reference the configuration file or run docker compose commands directly within this directory.

E.g. to verify the status of the stack there are two approaches:

Execute the command inside ./snow-owl/docker:

[root@host docker]# docker compose ps -a

Execute the command from somewhere else then ./snow-owl/docker:

[root@host ~]# docker compose --file /opt/snow-owl/docker/docker-compose.yml ps -a

/docker/configs/cert

This folder contains the files necessary to acquire an SSL certificate. None of the files should be changed here ideally.

/docker/configs/elasticsearch

There is one important file here, elasticsearch.yml which can be used for fine-tuning the Elasticsearch cluster. However, this is not necessary by default, only if an advanced configuration is required.

/docker/configs/ldap-bootstrap

This folder contains the files used upon the first start of the OpenLDAP server. The files within describe a set of groups and users to set up an initial user access model. User credentials for the test users can be found in the file called 200_users.ldif.

/docker/configs/nginx

Location of all configuration files for NGINX. By default, a non-secure HTTP configuration is assumed. If there is no need for an SSL certificate, then the files here will be used. If an SSL certificate was acquired, then the main configuration file of NGINX (nginx.conf) will be overwritten with the one under /docker/configs/cert/nginx.conf.

/docker/configs/snowowl

snowowl.yml: this file is the default configuration file of the Terminology Server. It does not need any changes by default either.

users: list of users for file-based authentication. There is one default user called snowowl for which the credentials can be found under ./docker/.env.

/docker/docker-compose.yml

The main configuration file for the docker stack. This file is replaced in case an SSL certificate was acquired (with the file /docker/configs/cert/docker-compose.yml). This is where volumes, ports, or environment variables can be configured.

/docker/docker_login.txt

The credentials to use for authenticating with the B2i private docker registry.

/docker/.env

The collection of environment variables for the docker-compose.yml file.

This is the file where most settings of the Terminology Server can be configured, including Java heap size, Snow Owl or Elasticsearch version, passwords, and folder structure.

/ldap

The location where the OpenLDAP server stores its data.

/logs

Log files of the Terminology Server

/resources

Location of Elasticsearch and Snow Owl resources.

/resources/indexes

This directory serves as the data folder for Elasticsearch, where datasets should be extracted.

/resources/attachments

Snow Owl's local file storage. Import and export artifacts are stored here.

/cert (optional)

In case an SSL certificate is acquired, all the files used by certbot and NGINX are stored here. This folder is automatically created by the certificate retrieval script.

/backup (optional)

This is the initial folder of all backup artifacts. This should be configured as a network mount to achieve data redundancy.

Technology stack

The technology stack behind the Terminology Server consists of the following components:

  • The Terminology Server application

  • Elasticsearch as the data layer

  • Optional: Authentication/Authorization service

    • Either an OpenID Connect/OAuth2.0 compatible external service with JSON Web Token support

    • Or an LDAP-compliant directory service

  • Optional: A reverse proxy handling the requests towards the REST API

Terminology Server

Outgoing communication from the Terminology Server goes via:

  • HTTP(s) towards Elasticsearch and to the external OpenID Connect/OAuth2 authorization server

  • LDAP(s) towards the A&A service

Incoming communication is handled through the HTTP port 8080.

A selected reverse proxy channels all incoming traffic through to the Terminology Server.

Elasticsearch

Elasticsearch versions supported by each major version of Snow Owl:

 | Snow Owl 7.x | Snow Owl 8.x | Snow Owl 9.x
Elasticsearch 7.x (deprecated) | ✔️ | ✔️ | ✔️
Elasticsearch 8.x | ✖️ | ✔️ | ✔️

The Elasticsearch cluster can either be:

  • a co-located, single-node, self-hosted cluster

  • a managed Elasticsearch cluster hosted by elastic.co

Having a co-located Elasticsearch service next to the Terminology Server directly impacts the hardware requirements. See our list of recommended hardware in the Hardware requirements section.

A&A service

For authorization and authentication, the application supports external OpenID Connect/OAuth2 compatible authorization services (eg. Auth0) and any traditional LDAP Directory Servers. We recommend starting with OpenLDAP and evolving to other solutions later, because it is easy to set up and maintain while keeping Snow Owl's user data isolated from any other A&A services.

Reverse proxy

A reverse proxy, such as NGINX, is recommended to be utilized between the Terminology Server and either the intranet or the internet. This will increase security and help with channeling REST API requests appropriately.

With a preconfigured domain name and DNS record, the default installation package can take care of requesting and maintaining the necessary certificates for secure HTTP. See the details of this in the Configuration section.

For simplifying the initial setup process we are shipping the Terminology Server with a default configuration of a co-located Elasticsearch cluster, a pre-populated OpenLDAP server, and an NGINX reverse proxy with the ability to opt-in for an SSL certificate.


Upgrade Snow Owl

When a new Snow Owl Terminology Server release is available we recommend performing the following steps.

New releases are going to be distributed the same way: a docker stack and its configuration within an archive.

It is advised to decompress the new release files to a temporary folder and compare the contents of ./snow-owl/docker.

[root@host]# diff /opt/snow-owl/docker/ /opt/new-snow-owl-release/snow-owl/docker/
Common subdirectories: /opt/snow-owl/docker/configs and /opt/new-snow-owl-release/snow-owl/docker/configs
diff /opt/snow-owl/docker/.env /opt/new-snow-owl-release/snow-owl/docker/.env
10c10
< ELASTICSEARCH_VERSION=7.16.3
---
> ELASTICSEARCH_VERSION=7.17.1
24c24
< SNOWOWL_VERSION=8.1.0
---
> SNOWOWL_VERSION=8.1.1

The changes usually are restricted to version numbers in the .env file. In such cases, it is equally acceptable to overwrite the contents of the ./snow-owl/docker folder as is or cherry-pick the necessary modifications by hand.

Once the new version of the files is in place it is sufficient to just issue the following commands, an explicit stop of the service is not even required (in the folder ./snow-owl/docker):

docker compose pull
docker compose up -d

Do not use docker compose restart because it won't pick up any .yml or .env file changes. See the explanation in the official Docker guide.

Restore

Using the custom backup container it is possible to restore:

  • the Elasticsearch indices

  • the OpenLDAP database (if present)

To restore any of the data the following steps have to be performed:

  • stop Snow Owl, Elasticsearch, and the OpenLDAP containers (in the folder ./snow-owl/docker):

docker compose stop snowowl elasticsearch ldap
  • (re)move the contents of the old Elasticsearch data folder:

mv -t /tmp ./snow-owl/resources/indexes/nodes
  • restart the Elasticsearch container only (keep Snow Owl stopped):

docker compose start elasticsearch
  • use the backup container's terminal and execute the restore script:

    • without any parameters, if only the Elasticsearch indices have to be restored

    root@host:/# docker exec -it backup bash
    root@ad36cfb0448c:/# /backup/restore.sh
    • with parameter -l in case the Elasticsearch indices and the OpenLDAP database have to be restored at the same time

    root@host:/# docker exec -it backup bash
    root@ad36cfb0448c:/# /backup/restore.sh -l
  • the script will list all available backups and prompt for a selection:

root@ad36cfb0448c:/# /backup/restore.sh

################################
Snow Owl restore script STARTED.

#### Verify Elasticsearch snapshot repository ####

Checking existence of repository 'snowowl-snapshots' ...
Repository with name 'snowowl-snapshots' is present, verifying repository state ...
Repository 'snowowl-snapshots' is functional

#### Select backup to restore ####

Found 10 available backups under '/backup'
Please select the backup to restore by choosing the right number in the menu below (hit Enter when the selection was made)

 1) snowowl-daily-20220323030001
 2) snowowl-daily-20220324030001
 3) snowowl-daily-20220325030002
 4) snowowl-daily-20220326030002
 5) snowowl-daily-20220329030001
 6) snowowl-daily-20220330030001
 7) snowowl-daily-20220331030002
 8) snowowl-daily-20220401030002
 9) snowowl-daily-20220402030001
10) snowowl-daily-20220405030002

#?
  • enter the numerical identifier of the backup to restore and wait until the process finishes

  • exit the backup container and restart all containers:

root@ad36cfb0448c:/# exit
root@host:/# docker compose up -d

In case only the contents of the OpenLDAP server have to be restored, it is sufficient to just extract the contents of the backup archive to ./snow-owl/ldap and restart the container.
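
As a sketch of that manual step (the archive name follows the naming convention shown on the Backup page; adjust the paths to your deployment):

docker compose stop ldap
tar --extract --gzip --same-owner --preserve-permissions \
    --file=/opt/snow-owl/backup/snowowl-daily-20220324030001.tar.gz \
    --directory=/opt/snow-owl/ldap
docker compose start ldap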

Advanced installation methods

This section includes information on how to set up Snow Owl and get it running using standalone installation methods

Java (JVM) Version

Snow Owl is built using Java and requires at least Java 17 to run. Only Oracle’s Java and the OpenJDK are supported.

We recommend installing the latest release in the Java 17 series, and using a supported LTS version of Java.

The version of Java that Snow Owl will use can be configured by setting the JAVA_HOME environment variable.
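
As a sketch, pointing Snow Owl at a specific JDK before startup (the path is an example and depends on your distribution):

export JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64
"$JAVA_HOME/bin/java" -version # verify that it reports a Java 17 release
./bin/startup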

Backup

This method is only applicable to deployments where the Elasticsearch cluster is co-located with the Snow Owl Terminology Server.

A managed Elasticsearch service will automatically configure a snapshot policy upon creation. See details here.

The Terminology Server release package contains a built-in solution to perform rolling and permanent data backups. The docker stack has a specialized container (called snow-owl-backup) that is responsible for creating scheduled backups of:

  • the Elasticsearch indices

  • the OpenLDAP database (if present)

For the Elasticsearch indices, the backup container uses the Snapshot API. Snapshots are labeled in a predefined format with timestamps. E.g. snowowl-daily-20220324030001.

The OpenLDAP database is backed up by compressing the contents of the folder under ./snow-owl/ldap. Filenames are generated using the name of the corresponding Elasticsearch snapshot. E.g. snowowl-daily-20220324030001.tar.gz.

Backup Window: when a backup operation is running the Terminology Server blocks all write operations on the Elasticsearch indices. This is to prevent data loss and have consistent backups.

Backup Duration: the very first backup of an Elasticsearch cluster takes a bit more time (depending on the size and I/O performance but between 20 minutes - 40 minutes), subsequent backups should take significantly less: 1 - 5 minutes.

Daily backups

Daily backups are rolling backups, scheduled, and cleaned up based on the settings specified in the ./snow-owl/docker/.env file. Here is a summary of the important settings that could be changed.

BACKUP_FOLDER

To store backups redundantly it is advised to mount a remote file share to a local path on the host. By default, this folder is configured to be at ./snow-owl/backup. It contains:

  • the snapshot files of the Elasticsearch cluster

  • the backup files of the OpenLDAP database

  • extra configuration files

Make sure the remote file share has enough free space to store around double the size of the ./snow-owl/resources/indexes folder.

CRON_DAYS, CRON_HOURS, CRON_MINUTES

Backup jobs are scheduled by cron, so cron expressions can be defined here to specify the time a daily backup should happen.

NUMBER_OF_DAILY_BACKUPS_TO_KEEP

This is used to tell the backup container how many daily backups must be kept.

Example daily backup config

Let's say we have an external file share mounted at /mnt/external_folder, and we need to create daily backups after each working day, during the night at 2:00 am. Only the last two weeks' worth of data should be kept (assuming 5 working days each week).

BACKUP_FOLDER=/mnt/external_folder
NUMBER_OF_DAILY_BACKUPS_TO_KEEP=10

CRON_DAYS=Tue-Sat
CRON_HOURS=2
CRON_MINUTES=0

One-off backups

It is also possible to perform backups occasionally, e.g. before versioning an important SNOMED CT release or before a Terminology Server version upgrade. These backups are kept until manually removed.

To create such backups the following command needs to be executed using the backup container's terminal:

root@host:/# docker exec -it backup bash
root@ad36cfb0448c:/# /backup/backup.sh -l my-backup-label

The script will create a snapshot backup of the Elasticsearch data with a label snowowl-my-backup-label-20220405030002 and an archive that contains the database of the OpenLDAP server with the name snowowl-my-backup-label-20220405030002.tar.gz.

User management

The Snow Owl Terminology Server employs two distinct methods for user management. The primary authentication and authorization service is the LDAP Directory Server, while a secondary option is a file-based database strictly utilized for administrative purposes. The following methods can be applied when granting or revoking user access.

LDAP-based identity provider

This is only applicable to the default deployment setup where a co-located OpenLDAP server is used alongside the Terminology Server.

There are several ways to access and manage an OpenLDAP server; hereby we will only describe one of them, through Apache Directory Studio.

Apache Directory Studio is an open-source, free application. It is available for different platforms (Windows, macOS, and Linux).

Before accessing the LDAP database there is one technical prerequisite to satisfy: the OpenLDAP server has to be accessible from the machine Apache Directory Studio is installed on. The best and most secure way to achieve that is to set up an SSH tunnel. Follow this link to an article that describes how to configure an SSH tunnel using PuTTY and Windows.

The OpenLDAP server uses port 389 for communication. This is the port that needs to be tunneled through the SSH connection. Here is what the final configuration looks like in PuTTY:

Once the SSH tunnel works, it's time to set up our connection in Apache DS. Go to File -> New -> LDAP Connection and set the following:

Hit the "Check Network Parameter" button to verify the network connection.

Go to the next page of the wizard and provide your credentials. The default Bind DN and Bind password can be found in the Terminology Server release package under ./snow-owl/docker/.env.

Hit the "Check Authentication" button to verify your credentials. Hit Finish to complete the setup procedure.

All users and groups should be browseable now through the LDAP Browser view:

Grant user access

To grant access to a new user an LDAP entry has to be created. Go to the LDAP Browse view and right-click on the organization node, then New -> New Entry:

It is easiest to use an existing entry as a template:

Leave everything as is on the Object Classes page, then hit Next. Fill in the new user's credentials:

On the final page, double-click on the userPassword row and provide the user's password:

Hit Finish to add the user to the database.

Now we need to assign a role for the user. Before going forward, get ahold of the user's DN using the LDAP Browser view:

Select the desired role group in the Browser view and add a new attribute:

Select the attribute type uniqueMember and hit Finish:

Paste the user's DN as the value of the attribute and hit Enter to make your changes permanent:

Revoke user access

To revoke access the user has to be deleted from the list of users:

And also has to be removed from the role group:

Change credentials

To change either the first or last name, or the password of a user, just edit any of the attributes in the user editor:

File-based identity provider

There is a configuration file ./snow-owl/docker/configs/snowowl/users that contains the list of users with their credentials encrypted. This method of authentication should be used for testing or internal purposes only, users added here will have elevated privileges.

To apply any changes made to the users file the Terminology Server has to be restarted afterward.

Grant user access

To grant access, the users file has to be amended with the new user and its credentials. There are several ways to encrypt a password, but here is one that is easy and available on most Linux variants. The htpasswd utility (provided by the apache2-utils or httpd-tools package) has to be installed:

htpasswd -nBC 10 my-new-username | head -n1 | sed 's/$2y/$2a/g' >> ./snow-owl/docker/configs/snowowl/users

It will prompt for the password and will amend the file with the new user at the end.

Revoke user access

Simply remove the user's line from the file and restart the service.

Change credentials

Remove the user's line from the file and regenerate the credentials according to the Grant user access section.

System configuration

Ideally, Snow Owl should run alone on a server and use all of the resources available to it. To do so, you need to configure your operating system to allow the user running Snow Owl to access more resources than allowed by default.

The following settings must be considered before going to production:

  • Disable swapping

  • Increase file descriptors

  • Ensure sufficient virtual memory

  • Ensure sufficient threads

Configuring system settings

Where to configure system settings depends on which package you have used to install Snow Owl, and which operating system you are using.

When using the .zip or .tar.gz packages, system settings can be configured:

  • temporarily with ulimit, or

  • permanently in /etc/security/limits.conf.

When using the RPM or Debian packages, most system settings are set in the system configuration file. However, systems that use systemd require that system limits are specified in a systemd configuration file.

ulimit

On Linux systems, ulimit can be used to change resource limits temporarily. Limits usually need to be set as root before switching to the user that will run Snow Owl. For example, to set the number of open file handles (ulimit -n) to 65,536, you can do the following:

sudo su # Become `root`
ulimit -n 65536 # Change the max number of open files
su snowowl # Become the `snowowl` user in order to start Snow Owl

The new limit is only applied during the current session.

You can consult all currently applied limits with ulimit -a.

/etc/security/limits.conf

On Linux systems, persistent limits can be set for a particular user by editing the /etc/security/limits.conf file. To set the maximum number of open files for the snowowl user to 65,536, add the following line to the limits.conf file:

snowowl  -  nofile  65536

This change will only take effect the next time the snowowl user opens a new session.

Ubuntu and limits.conf

Ubuntu ignores the limits.conf file for processes started by init.d. To enable the limits.conf file, edit /etc/pam.d/su and uncomment the following line:

# session    required   pam_limits.so

Sysconfig file

When using the RPM or Debian packages, system settings and environment variables can be specified in the system configuration file, which is located in:

Package | Location
RPM | /etc/sysconfig/snowowl
Debian | /etc/default/snowowl

However, for systems that use systemd, system limits need to be specified via systemd.

Systemd configuration

When using the RPM or Debian packages on systems that use systemd, system limits must be specified via systemd.

The systemd service file (/usr/lib/systemd/system/snowowl.service) contains the limits that are applied by default.

To override them, add a file called /etc/systemd/system/snowowl.service.d/override.conf (alternatively, you may run sudo systemctl edit snowowl which opens the file automatically inside your default editor). Set any changes in this file, such as:

[Service]
LimitMEMLOCK=infinity

Once finished, run the following command to reload units:

sudo systemctl daemon-reload


Using an archive

Snow Owl is provided as a .zip and as a .tar.gz package. These packages can be used to install Snow Owl on any system.

Download and install the .zip package

The .zip archive for Snow Owl can be downloaded and installed as follows:

wget https://github.com/b2ihealthcare/snow-owl/releases/download/<version>/snow-owl-oss-<version>.zip
wget https://github.com/b2ihealthcare/snow-owl/releases/download/<version>/snow-owl-oss-<version>.zip.sha512
shasum -a 512 -c snow-owl-oss-<version>.zip.sha512 # compares the SHA of the downloaded archive, should output: `snow-owl-oss-<version>.zip: OK`
unzip snow-owl-oss-<version>.zip
cd snow-owl-oss-<version>/ # This directory is known as `$SO_HOME`

Download and install the .tar.gz package

The .tar.gz archive for Snow Owl can be downloaded and installed as follows:

wget https://github.com/b2ihealthcare/snow-owl/releases/download/<version>/snow-owl-oss-<version>.tar.gz
wget https://github.com/b2ihealthcare/snow-owl/releases/download/<version>/snow-owl-oss-<version>.tar.gz.sha512
shasum -a 512 -c snow-owl-oss-<version>.tar.gz.sha512 # compares the SHA of the downloaded archive, should output: `snow-owl-oss-<version>.tar.gz: OK`
tar -xzf snow-owl-oss-<version>.tar.gz
cd snow-owl-oss-<version>/ # This directory is known as `$SO_HOME`

Running Snow Owl from the command line

Snow Owl can be started from the command line as follows:

./bin/startup

By default, Snow Owl runs in the foreground, prints its logs to the standard output (stdout), and can be stopped by pressing Ctrl-C.

Checking that Snow Owl is running

You can test that your instance is running by sending an HTTP request to Snow Owl's status endpoint:

curl http://localhost:8080/snowowl/info

which should give you a response like this:

{
  "version": "<version_number>",
  "description": "You Know, for Terminologies",
  "repositories": {
    "items": [
      {
        "id": "snomedStore",
        "health": "GREEN"
      }
    ]
  }
}

Running in the background

You can send the Snow Owl process to the background using a combination of nohup and the & character:

nohup ./bin/startup > /dev/null &

Log messages can be found in the $SO_HOME/serviceability/logs/ directory.

To shut down Snow Owl, you can kill the process ID directly:

kill <pid>

or using the provided shutdown script:

./bin/shutdown
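
If you need the process ID for the kill command above, you can look it up by matching on the command line; a sketch, assuming the process command line contains snow-owl:

pgrep -f snow-owl # prints the PID(s) of the running Snow Owl process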

Directory layout of .zip and .tar.gz archives

The .zip and .tar.gz packages are entirely self-contained. All files and directories are, by default, contained within $SO_HOME — the directory created when unpacking the archive.

This is very convenient because you don’t have to create any directories to start using Snow Owl, and uninstalling Snow Owl is as easy as removing the $SO_HOME directory. However, it is advisable to change the default locations of the config directory, the data directory, and the logs directory so that you do not delete important data later on.

Type | Description | Default Location | Setting
home | Snow Owl home directory or $SO_HOME | Directory created by unpacking the archive |
bin  | Binary scripts including startup/shutdown to start/stop the instance | $SO_HOME/bin |
conf | Configuration files including snowowl.yml | $SO_HOME/configuration |
data | The location of the data files and resources | $SO_HOME/resources | path.data
logs | Log files location | $SO_HOME/serviceability/logs |

The latest stable version of Snow Owl can be found on the Snow Owl Releases page.

Number of threads

Snow Owl uses a number of thread pools for different types of operations. It is important that it is able to create new threads whenever needed. Make sure that the number of threads that the Snow Owl user can create is at least 4096.

This can be done by setting ulimit -u 4096 as root before starting Snow Owl, or by setting nproc to 4096 in /etc/security/limits.conf.
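
For example, the following /etc/security/limits.conf entry raises the thread limit for a dedicated snowowl account (the user name is an assumption; use whichever account runs Snow Owl):

snowowl  -  nproc  4096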

When run as services under systemd, the package distributions configure the number of threads for the Snow Owl process automatically. No additional configuration is required.

Install Snow Owl

Snow Owl is provided in the following package formats:

Package | Description
zip/tar.gz | The zip and tar.gz packages are suitable for installation on any system and are the easiest choice for getting started with Snow Owl on most systems.
rpm | The rpm package is suitable for installation on Red Hat, CentOS, SLES, OpenSuSE and other RPM-based systems. RPMs may be downloaded from the Downloads section.
deb | The deb package is suitable for Debian, Ubuntu, and other Debian-based systems. Debian packages may be downloaded from the Downloads section.

Using RPM

The RPM for Snow Owl can be downloaded from the Downloads section. It can be used to install Snow Owl on any RPM-based system such as OpenSuSE, SLES, CentOS, Red Hat, and Oracle Enterprise.

RPM install is not supported on distributions with old versions of RPM, such as SLES 11 and CentOS 5. Please see Install Snow Owl with .zip or .tar.gz instead.

Download and install

wget https://github.com/b2ihealthcare/snow-owl/releases/download/<version>/snow-owl-oss-<version>.rpm
wget https://github.com/b2ihealthcare/snow-owl/releases/download/<version>/snow-owl-oss-<version>.rpm.sha512
shasum -a 512 -c snow-owl-oss-<version>.rpm.sha512 # Compares the SHA of the downloaded RPM and the published checksum, which should output `snow-owl-oss-<version>.rpm: OK`.
sudo rpm --install snow-owl-oss-<version>.rpm

On systemd-based distributions, the installation scripts will attempt to set kernel parameters (e.g., vm.max_map_count); you can skip this by masking the systemd-sysctl.service unit.

Running Snow Owl with SysV init

Use the chkconfig command to configure Snow Owl to start automatically when the system boots up:

sudo chkconfig --add snowowl

Snow Owl can be started and stopped using the service command:

sudo -i service snowowl start
sudo -i service snowowl stop

If Snow Owl fails to start for any reason, it will print the reason for failure to STDOUT. Log files can be found in /var/log/snowowl/.

Running Snow Owl with systemd

To configure Snow Owl to start automatically when the system boots up, run the following commands:

sudo /bin/systemctl daemon-reload
sudo /bin/systemctl enable snowowl.service

Snow Owl can be started and stopped as follows:

sudo systemctl start snowowl.service
sudo systemctl stop snowowl.service

These commands provide no feedback as to whether Snow Owl was started successfully or not. Instead, this information will be written in the log files located in /var/log/snowowl/.

Checking that Snow Owl is running

You can test that your Snow Owl instance is running by sending an HTTP request to:

curl http://localhost:8080/snowowl/info

which should give you a response something like this:

{
  "version": "<version_number>",
  "description": "You Know, for Terminologies",
  "repositories": {
    "items": [
      {
        "id": "snomedStore",
        "health": "GREEN"
      }
    ]
  }
}

Configuring Snow Owl

Snow Owl defaults to using /etc/snowowl for runtime configuration. The ownership of this directory and all files in this directory are set to root:snowowl on package installation and the directory has the setgid flag set so that any files and subdirectories created under /etc/snowowl are created with this ownership as well (e.g., if a keystore is created using the keystore tool). It is expected that this be maintained so that the Snow Owl process can read the files under this directory via the group permissions.

Snow Owl loads its configuration from the /etc/snowowl/snowowl.yml file by default. The format of this config file is explained in Configuring Snow Owl.

Directory layout of RPM

The RPM places config files, logs, and the data directory in the appropriate locations for an RPM-based system:

Type | Description | Default Location | Setting
home | Snow Owl home directory or $SO_HOME | /usr/share/snowowl |
bin  | Binary scripts including startup/shutdown to start/stop the instance | /usr/share/snowowl/bin |
conf | Configuration files including snowowl.yml | /etc/snowowl |
data | The location of the data files and resources | /var/lib/snowowl | path.data
logs | Log files location | /var/log/snowowl |

File descriptors

This is only relevant if you are running Snow Owl with an embedded Elasticsearch and not connecting it to an existing cluster.

Snow Owl (with embedded Elasticsearch) uses a lot of file descriptors or file handles. Running out of file descriptors can be disastrous and will most probably lead to data loss. Make sure to increase the limit on the number of open file descriptors for the user running Snow Owl to 65,536 or higher.

For the .zip and .tar.gz packages, set ulimit -n 65536 as root before starting Snow Owl, or set nofile to 65536 in /etc/security/limits.conf.
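
For example, the limits.conf entry for a service account named snowowl (the account name is an assumption) would be:

snowowl  -  nofile  65536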

RPM and Debian packages already default the maximum number of file descriptors to 65536 and do not require further configuration.

Using DEB

The Debian package for Snow Owl can be downloaded from the Downloads section. It can be used to install Snow Owl on any Debian-based system such as Debian and Ubuntu.

Download and install

wget https://github.com/b2ihealthcare/snow-owl/releases/download/<version>/snow-owl-oss-<version>.deb
wget https://github.com/b2ihealthcare/snow-owl/releases/download/<version>/snow-owl-oss-<version>.deb.sha512
shasum -a 512 -c snow-owl-oss-<version>.deb.sha512 # Compares the SHA of the downloaded Debian package and the published checksum, which should output `snow-owl-oss-<version>.deb: OK`.
sudo dpkg -i snow-owl-oss-<version>.deb

Running Snow Owl with SysV init

Use the update-rc.d command to configure Snow Owl to start automatically when the system boots up:

sudo update-rc.d snowowl defaults 95 10

Snow Owl can be started and stopped using the service command:

sudo -i service snowowl start
sudo -i service snowowl stop

If Snow Owl fails to start for any reason, it will print the reason for failure to STDOUT. Log files can be found in /var/log/snowowl/.

Running Snow Owl with systemd

To configure Snow Owl to start automatically when the system boots up, run the following commands:

sudo /bin/systemctl daemon-reload
sudo /bin/systemctl enable snowowl.service

Snow Owl can be started and stopped as follows:

sudo systemctl start snowowl.service
sudo systemctl stop snowowl.service

These commands provide no feedback as to whether Snow Owl was started successfully or not. Instead, this information will be written in the log files located in /var/log/snowowl/.

Checking that Snow Owl is running

You can test that your Snow Owl instance is running by sending an HTTP request to:

curl http://localhost:8080/snowowl/info

which should give you a response something like this:

{
  "version": "<version_number>",
  "description": "You Know, for Terminologies",
  "repositories": {
    "items": [
      {
        "id": "snomedStore",
        "health": "GREEN"
      }
    ]
  }
}

Configuring Snow Owl

Snow Owl defaults to using /etc/snowowl for runtime configuration. The ownership of this directory and all files in this directory are set to root:snowowl on package installation and the directory has the setgid flag set so that any files and subdirectories created under /etc/snowowl are created with this ownership as well (e.g., if a keystore is created using the keystore tool). It is expected that this be maintained so that the Snow Owl process can read the files under this directory via the group permissions.

NOTE: Distributions that use systemd require that system resource limits be configured via systemd rather than via the /etc/sysconfig/snowowl file.

Directory layout of Debian package

The Debian package places config files, logs, and the data directory in the appropriate locations for a Debian-based system:

Type | Description | Default Location | Setting
home | Snow Owl home directory or $SO_HOME | /usr/share/snowowl |
bin  | Binary scripts including startup/shutdown to start/stop the instance | /usr/share/snowowl/bin |
conf | Configuration files including snowowl.yml | /etc/snowowl |
data | The location of the data files and resources | /var/lib/snowowl | path.data
logs | Log files location | /var/log/snowowl |

Start Snow Owl

The method for starting Snow Owl varies depending on how you installed it.

Archive packages (.tar.gz, .zip)

If you installed Snow Owl with a .tar.gz or .zip package, you can start Snow Owl from the command line.

Running Snow Owl from the command line

Snow Owl can be started from the command line as follows:

./bin/startup

By default, Snow Owl runs in the foreground, prints some of its logs to the standard output (stdout), and can be stopped by pressing Ctrl-C.

Running as a daemon

To run Snow Owl as a daemon, use the following command:

nohup ./bin/startup > /dev/null &

Log messages can be found in the $SO_HOME/serviceability/logs/ directory.

The startup scripts provided in the RPM and Debian packages take care of starting and stopping the Snow Owl process for you.

RPM packages

Snow Owl is not started automatically after installation. How to start and stop Snow Owl depends on whether your system uses SysV init or systemd (used by newer distributions). You can tell which is being used by running this command:

ps -p 1

Running Snow Owl with SysV init

Use the chkconfig command to configure Snow Owl to start automatically when the system boots up:

sudo chkconfig --add snowowl

Snow Owl can be started and stopped using the service command:

sudo -i service snowowl start
sudo -i service snowowl stop

If Snow Owl fails to start for any reason, it will print the reason for failure to STDOUT. Log files can be found in /var/log/snowowl/.

Running Snow Owl with systemd

To configure Snow Owl to start automatically when the system boots up, run the following commands:

sudo /bin/systemctl daemon-reload
sudo /bin/systemctl enable snowowl.service

Snow Owl can be started and stopped as follows:

sudo systemctl start snowowl.service
sudo systemctl stop snowowl.service

These commands provide no feedback as to whether Snow Owl was started successfully or not. Instead, this information will be written in the log files located in /var/log/snowowl/.

Snow Owl loads its configuration from the /etc/snowowl/snowowl.yml file by default. The format of this config file is explained in Configuring Snow Owl.

Disable swapping

Most operating systems try to use as much memory as possible for file system caches and eagerly swap out unused application memory. This can result in parts of the JVM heap or even its executable pages being swapped out to disk.

Swapping is very bad for performance, and should be avoided at all costs. It can cause garbage collections to last for minutes instead of milliseconds and can cause services to respond slowly or even time out.

There are two approaches to disabling swapping. The preferred option is to completely disable swap, but if this is not an option, you can minimize swappiness.

Disable all swap files

Usually Snow Owl is the only service running on a box, and its memory usage is controlled by the JVM options. There should be no need to have swap enabled.

On Linux systems, you can disable swap temporarily by running:

sudo swapoff -a

To disable it permanently, you will need to edit the /etc/fstab file and comment out any lines that contain the word swap.
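
After commenting the entry out, the relevant fstab line might look like this (the device name is purely illustrative):

# /dev/sda2  none  swap  sw  0  0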

Configure swappiness

Another option available on Linux systems is to ensure that the sysctl value vm.swappiness is set to 1. This reduces the kernel’s tendency to swap and should not lead to swapping under normal circumstances, while still allowing the whole system to swap in emergency conditions.

# sysctl settings, to be added to /etc/sysctl.conf or equivalent
vm.swappiness = 1
vm.max_map_count = 262144
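
To apply the settings from /etc/sysctl.conf without rebooting, reload them with:

sudo sysctl -p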

Spin up the service

Full list of steps to perform before spinning up the service:

  1. Extract the Terminology Server release archive to a folder. E.g. /opt/snow-owl

  2. (Optional) Obtain an SSL certificate

    1. Make sure a DNS A record is routed to the host's public IP address

    2. Go into the folder ./snow-owl/docker/cert

    3. Execute the ./init-certificate.sh script:

    ./init-certificate.sh -d snow-owl.example.com
  3. (Optional) Configure access for managed Elasticsearch Cluster (elastic.co)

  4. (Optional) Extract dataset to ./snow-owl/resources where folder structure should look like ./snow-owl/resources/indexes/nodes/0 at the end.

  5. Ensure file ownership is set to UID=1000 and GID=0:

    chown -R 1000:0 ./snow-owl/docker ./snow-owl/logs ./snow-owl/resources
  6. Check any credentials or settings that need to be changed in ./snow-owl/docker/.env

  7. Authenticate with our private docker registry while in the folder ./snow-owl/docker:

    cat docker_login.txt | docker login -u <username> --password-stdin https://docker.b2ihealthcare.com
  8. Issue a pull (in folder ./snow-owl/docker)

    docker compose pull
  9. Spin up the service (in the folder ./snow-owl/docker)

    docker compose up -d
  10. Verify that the REST API of the Terminology Server is available at:

    1. With SSL: https://snow-owl.example.com/snowowl

    2. Without SSL: http://hostname:8080/snowowl

  11. Verify that the server and cluster status is GREEN by querying the following REST API endpoint:

    1. With SSL:

      curl https://snow-owl.example.com/snowowl/info
    2. Without SSL:

      curl http://hostname:8080/snowowl/info

Advanced configuration

Config files location

Snow Owl has three configuration files:

  • snowowl.yml for configuring Snow Owl

  • serviceability.xml for configuring Snow Owl logging

  • elasticsearch.yml for configuring the underlying Elasticsearch instance in case of embedded deployments

These files are located in the config directory, whose default location depends on whether the installation is from an archive distribution (tar.gz or zip) or a package distribution (Debian or RPM packages).

For the archive distributions, the config directory location defaults to $SO_HOME/configuration. The location of the config directory can be changed via the SO_PATH_CONF environment variable as follows:

SO_PATH_CONF=/path/to/my/config ./bin/startup

Alternatively, you can export the SO_PATH_CONF environment variable via the command line or via your shell profile.

For the package distributions, the config directory location defaults to /etc/snowowl. The location of the config directory can also be changed via the SO_PATH_CONF environment variable, but note that setting this in your shell is not sufficient. Instead, this variable is sourced from /etc/default/snowowl (for the Debian package) and /etc/sysconfig/snowowl (for the RPM package). You will need to edit the SO_PATH_CONF=/etc/snowowl entry in one of these files accordingly to change the config directory location.

Config file format

The configuration format is YAML. Here is an example of changing the path of the data directory:

path:
    data: /var/lib/snowowl

Settings can also be flattened as follows:

path.data: /var/lib/snowowl

Environment variable substitution

Environment variables referenced with the ${...} notation within the configuration file will be replaced with the value of the environment variable, for instance:

repository.host: ${HOSTNAME}
repository.port: ${SO_REPOSITORY_PORT}

Setting JVM options

The preferred method of setting JVM options (including system properties and JVM flags) is via the SO_JAVA_OPTS environment variable. For instance:

export SO_JAVA_OPTS="$SO_JAVA_OPTS -Djava.io.tmpdir=/path/to/temp/dir"
./bin/startup

When using the RPM or Debian packages, SO_JAVA_OPTS can be specified in the system configuration file.

Virtual memory

Snow Owl uses a mmapfs directory by default to store its data. The default operating system limits on mmap counts are likely to be too low, which may result in out of memory exceptions.

On Linux, you can increase the limits by running the following command as root:

sysctl -w vm.max_map_count=262144

To set this value permanently, update the vm.max_map_count setting in /etc/sysctl.conf. To verify after rebooting, run sysctl vm.max_map_count.

The RPM and Debian packages will configure this setting automatically. No further configuration is required.

Stop Snow Owl

An orderly shutdown of Snow Owl ensures that Snow Owl has a chance to clean up and close outstanding resources. For example, an instance that is shut down in an orderly fashion will initiate an orderly shutdown of the embedded Elasticsearch instance, gracefully close and disconnect connections, and perform other related cleanup activities. You can help ensure an orderly shutdown by properly stopping Snow Owl.

If you’re running Snow Owl as a service, you can stop Snow Owl via the service management functionality provided by your installation.

If you’re running Snow Owl directly, you can stop Snow Owl by sending Ctrl-C if you’re running Snow Owl in the console, or by invoking the provided shutdown script as follows:

./bin/shutdown

Logging configuration

The logging configuration file (serviceability.xml) can be used to configure Snow Owl logging. The logging configuration file location depends on your installation method; by default it is located in the ${SO_HOME}/configuration folder.

Snow Owl uses SLF4J and Logback for logging. Extensive information on how to customize logging and all the supported appenders can be found in the Logback documentation.

Configure Snow Owl

Elasticsearch settings

By default, Snow Owl includes the OSS version of Elasticsearch and runs it in embedded mode to store terminology data and make it available for search. This is convenient for single-node environments (eg. for evaluation, testing and development), but it might not be sufficient when you go into production.

To configure Snow Owl to connect to an Elasticsearch cluster, change the clusterUrl property in the snowowl.yml configuration file:

repository:
  index:
    clusterUrl: http://your.es.cluster:9200 # the ES cluster URL
    clusterUsername: snowowl # Optional username to connect to a protected ES cluster
    clusterPassword: snowowl_password # Optional password to connect to a protected ES cluster

The value for this setting should be a valid HTTP URL pointing to the HTTP API of your Elasticsearch cluster, which by default runs on port 9200.

Path settings

If you are using the .zip or .tar.gz archives, the data and logs directories are sub-folders of $SO_HOME. If these important folders are left in their default locations, there is a high risk of them being deleted while upgrading Snow Owl to a new version.

In production use, you will almost certainly want to change the locations of the data and log folders:

path:
  data: /var/data/snowowl

The RPM and Debian distributions already use custom paths for data and logs.

Network settings

To allow clients to connect to Snow Owl, make sure you open access to the following ports:

  • 8080/TCP: Used by Snow Owl Server's REST API for HTTP access

  • 8443/TCP: Used by Snow Owl Server's REST API for HTTPS access

  • 2036/TCP: Used by the Net4J binary protocol connecting Snow Owl clients to the server

Setting the heap size

By default, Snow Owl tells the JVM to use a heap with a minimum and maximum size of 2 GB. When moving to production, it is important to configure heap size to ensure that Snow Owl has enough heap available.

To configure the heap size settings, change the -Xms and -Xmx settings in the SO_JAVA_OPTS environment variable:

# Set the minimum and maximum heap size to 12 GB.
SO_JAVA_OPTS="-Xms12g -Xmx12g" ./bin/startup

The value for these settings depends on the amount of RAM available on your server and whether you are running Elasticsearch on the same node as Snow Owl (either embedded or as a service) or running it in its own cluster. Good rules of thumb are:

  • Set the minimum heap size (Xms) and maximum heap size (Xmx) to be equal to each other.

  • Too much heap can subject the JVM to long garbage collection pauses.

  • Set Xmx to no more than 50% of your physical RAM, to ensure that there is enough physical RAM left for kernel file system caches.

  • Snow Owl connecting to a remote Elasticsearch cluster requires less memory, but make sure you still allocate enough for your use cases (classification, batch processing, etc.).

Enjoy using the Snow Owl Terminology Server 🎉


Elasticsearch configuration

By default, Snow Owl starts and connects to an embedded Elasticsearch cluster available on http://localhost:9200. This cluster has only a single node and its discovery method is set to single-node, which means it is not able to connect to other Elasticsearch clusters and will be used exclusively by Snow Owl.

This single-node Elasticsearch cluster can easily serve Snow Owl in testing, evaluation and small authoring environments, but it is recommended to customize how Snow Owl connects to an Elasticsearch cluster in larger environments (especially when planning to scale with user demand).

You have two options to configure Elasticsearch used by Snow Owl.

Configure the embedded instance

The first option is to configure the underlying Elasticsearch instance by editing the elasticsearch.yml configuration file which, depending on your installation, is available in the configuration directory (you can create the file if it is not available; Snow Owl will pick it up during the next startup).
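
A minimal sketch of such a file, assuming an embedded single-node deployment (both settings are illustrative overrides, not required defaults):

# configuration/elasticsearch.yml
cluster.name: snowowl-elasticsearch  # name reported by the embedded cluster
http.port: 9200                      # port of the embedded HTTP API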

Connect to a remote cluster

The second option is to configure Snow Owl to use a remote Elasticsearch cluster without the embedded instance. To use this feature you need to set the repository.index.clusterUrl configuration parameter to the remote address of your Elasticsearch cluster. When Snow Owl is configured to connect to a remote Elasticsearch cluster, it won't boot up the embedded instance, which reduces the memory requirements of Snow Owl.

Security

Snow Owl security features enable you to easily secure your terminology server. You can password-protect your data as well as implement more advanced security measures such as role-based access control and auditing.

Realms

By default, Snow Owl comes without any security features enabled and all read and write operations are unprotected. To configure a security realm, you can choose from the following built-in identity providers:

  • Configure a file realm

  • Configure an LDAP realm

  • JWT

Authentication

NOTE: It is recommended in production environments that all communication between a client and Snow Owl is performed through a secure connection.

Snow Owl sends an HTTP 401 Unauthorized response if a request needs to be authenticated.

Authorization

If supported by the security realm, Snow Owl will also check whether an authenticated user is permitted to perform the requested action on a given resource.

Within an organization, roles are created for various job functions. The permissions to perform certain operations are assigned to specific roles. Members, staff, or other system users are assigned particular roles, and through those role assignments acquire the permissions needed to perform particular system functions. Since users are not assigned permissions directly, but only acquire them through their role (or roles), management of individual user rights becomes a matter of simply assigning appropriate roles to the user's account; this simplifies common operations, such as adding a user or changing a user's department.

Rules

  1. Role assignment: A subject can exercise a permission only if the subject has selected or been assigned a role.

  2. Permission authorization: A subject can exercise a permission only if the permission is authorized for the subject's active role.

With rules 1 and 2, it is ensured that users can exercise only permissions for which they are authorized.

S = Subject = A person or automated agent
R = Role = Job function or title which defines an authority level
P = Permissions = An approval of a mode of access to a resource

Permissions

In Snow Owl a permission is a single value that represents both the operation the user would like to perform and the resource that is being accessed. The format is the following: <operation>:<resource>

Currently there are 7 operations supported by Snow Owl:

  • browse - read the contents of a resource

  • edit - write the contents of the resource, delete the resource

  • import - import from external content and formats

  • export - export to external content and formats

  • version - create a version in a Code System, create a release

  • promote - merge content from isolated branch environments to a Code System's development version

  • classify - run classifiers and save their results

Resources represent the content that is being accessed by a client. A resource can be anything that can be resolved to a database entry. Currently, the following resource formats are allowed to be used in a permission:

  • <repositoryId> - access the entire content available in a terminology repository

  • <repositoryId>/<branch> - access the content available on a branch in a terminology repository

  • <codeSystemId> - access all content of a Code System, including both the latest development and all previous releases

  • <codeSystemId>/<versionId> - access a specific release of a Code System

There is a special * wild card character that can be used for both the operation and resource parts in a permission value to allow any operation to be performed on any or selected resources, or to allow certain operations to be performed on any available resources.

Examples:

  • browse:snomedStore - browse all SNOMED CT Code Systems and their content

  • edit:SNOMEDCT-UK-CL - edit the SNOMEDCT-UK-CL Code System

  • export:SNOMEDCT-US/2019-03-01 - export the 2019-03-01 US Extension release

  • *:SNOMEDCT - allow any operations to be performed on the SNOMEDCT Code System

  • browse:* - allow read operations on all available resources

  • *:* - administrator permission, the user can do anything with any of the available resources

After configuring at least one security realm, Snow Owl will authenticate all incoming requests to ensure that the sender of the request is allowed to access the terminology server and its contents. To authenticate a request, the client must send an HTTP Basic or Bearer Authorization header with the request. The value should be a user/pass pair in case of using Basic authentication or a token generated by Snow Owl if using the Bearer method.
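
For example, an authenticated request using HTTP Basic credentials (a sketch; snowowl/snowowl are the file realm's default user and password, replace them with your own):

curl -u snowowl:snowowl http://localhost:8080/snowowl/info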


LDAP realm

You can configure security to communicate with a Lightweight Directory Access Protocol (LDAP) server to authenticate and authorize users.

To integrate with LDAP, you configure an ldap realm in the snowowl.yml configuration file.

identity:
  providers:
    - ldap:
        uri: <ldap_uri>
        bindDn: cn=admin,dc=snowowl,dc=b2international,dc=com
        bindDnPassword: <adminpwd>
        baseDn: dc=snowowl,dc=b2international,dc=com
        roleBaseDn: {baseDn}
        userFilter: (objectClass={userObjectClass})
        roleFilter: (objectClass={roleObjectClass})
        userObjectClass: inetOrgPerson
        roleObjectClass: groupOfUniqueNames
        userIdProperty: uid
        permissionProperty: description
        memberProperty: uniqueMember
        usePool: false
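
Before wiring these values into snowowl.yml, it can be useful to verify the bind DN, password and base DN directly against the directory. A minimal sketch using the standard ldapsearch tool (host, port and password are placeholders):

ldapsearch -x -H ldap://localhost:389 \
  -D "cn=admin,dc=snowowl,dc=b2international,dc=com" -w <adminpwd> \
  -b "dc=snowowl,dc=b2international,dc=com" "(objectClass=inetOrgPerson)" uid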

Configuration

The following configuration settings are supported:

Configuration | Description
uri | The LDAP URI that points to the LDAP/AD server to connect to.
bindDn | The user's DN who has access to the entire baseDn and roleBaseDn and can read content from it.
bindDnPassword | The password of the bindDn user.
baseDn | The base directory where all entries in the entire subtree will be considered as potential matches for all searches.
roleBaseDn | Alternative base directory where all role entries in the entire subtree will be considered. Defaults to the baseDn value.
userFilter | The search filter to search for user entries under the configured baseDn. Defaults to (objectClass={userObjectClass}).
roleFilter | The search filter to search for role entries under the configured roleBaseDn. Defaults to (objectClass={roleObjectClass}).
userObjectClass | The user object's class to look for when searching for user entries. Defaults to the inetOrgPerson class.
roleObjectClass | The role object's class to look for when searching for role entries. Defaults to the groupOfUniqueNames class.
userIdProperty | The userId property to access and read for the user's unique identifier. Usually their username or email address. Defaults to the uid property.
permissionProperty | A multi-valued property that is used to store permission information on a role. Defaults to the description property.
memberProperty | A multi-valued property that is used to store and retrieve user DNs that belong to a given role. Defaults to the uniqueMember property.

The default configuration values are selected to support both OpenLDAP and Active Directory without needing to customize the default schema that comes with their default installation.

Configure Authentication

When users send their username and password with their request in the Authorization header, the LDAP security realm performs the following steps to authenticate the user:

  1. Searches for a user entry in the configured baseDn to get the DN

  2. Authenticates with the LDAP instance using the received DN and the provided password

If any of the above-mentioned steps fails for any reason, the user is not allowed to access the terminology server's content and the server will respond with HTTP 401 Unauthorized.

To configure authentication, you need to configure the uri, baseDn, bindDn, bindDnPassword, userObjectClass and userIdProperty configuration settings.

Adding a user

To add a user in the LDAP realm, create an entry under the specified baseDn using the configured userObjectClass as class and the userIdProperty as the property where the user's username/e-mail address is configured.

Example user entry:

dn: cn=John Doe+sn=Doe+uid=johndoe@b2international.com,dc=snowowl,dc=b2international,dc=com
objectClass: inetOrgPerson
objectClass: organizationalPerson
objectClass: person
objectClass: top
cn: John Doe
sn: Doe
uid: johndoe@b2international.com
userPassword: <encrypted_password> 
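
The userPassword attribute should hold a hashed value rather than plain text. On OpenLDAP hosts one way to generate it is the standard slappasswd utility (an assumption about your tooling; any LDAP-compatible password hash works):

slappasswd -s <plaintext_password> # prints e.g. an {SSHA}... hash to store in userPassword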

Configure Authorization

On top of the authentication part, the LDAP realm provides configuration values to support full role-based access control and authorization.

When a user's request is successfully authenticated with the LDAP realm, Snow Owl authorizes the request using the user's currently set roles and permissions in the configured LDAP instance.

Adding a role

To add a role in the LDAP realm, create an entry under the specified baseDn using the configured roleObjectClass as class and the configured permissionProperty and memberProperty properties for permission and user mappings, respectively.

Example read-only role:

dn: cn=Browser,dc=snowowl,dc=b2international,dc=com
objectClass: top
objectClass: groupOfUniqueNames
cn: Browser
description: browse:*
description: export:*
uniqueMember: cn=John Doe+sn=Doe+uid=johndoe@b2international.com,dc=snowowl,dc=b2international,dc=com 

Terminology Standards

This section describes the various supported healthcare standards, how they can be imported into a Snow Owl Terminology Server, and how they can be accessed and distributed further.

List of officially supported standards

  • SNOMED CT

  • LOINC

  • Socialstyrelsen Standards

SNOMED CT

Introduction

The Snow Owl Terminology Server is capable of managing multiple SNOMED CT editions and/or extensions for both distribution and authoring purposes in a single deployment. This guide describes the typical scenarios, like creating, managing, releasing and upgrading SNOMED CT Extensions in great detail with images. If you are unfamiliar with SNOMED CT Extensions, the next section walks you through their logical model and basic characteristics, while the following pages describe distribution and authoring scenarios as well as how to use the Snow Owl Terminology Server for SNOMED CT Extensions.

What is a SNOMED CT Extension?

The official SNOMED CT Extension Practical Guide has been used to help produce the content available on this page: https://confluence.ihtsdotools.org/display/DOCEXTPG

Common Structure

SNOMED CT is a multilingual clinical terminology that covers a broad scope. However, some users may need additional concepts, relationships, descriptions or reference sets to support national, local or organizational needs.

The extension mechanism allows SNOMED CT to be customized to address the terminology needs of a country or organization that are not met by the International Edition.

A SNOMED CT Extension may contain components and/or derivatives (e.g. reference sets used to represent subsets, maps or language preferences). Since the international edition and all extensions share a common structure, the same application software can be used to enter, store and process information from different extensions. Similarly, reference sets can be constructed to refer to content from both the international release and extensions. The common structure also makes it easier for content developed by an extension producer to be submitted for possible inclusion in a National Edition or the International Edition.

Therefore, a SNOMED CT Extension uses the same Release Format version 2 as the International Edition; they share a common structure and schema (see the Release Format 2 specification).

Namespace

Extensions are managed by SNOMED International, and Members or Affiliate Licensees who have been issued a namespace identifier by SNOMED International. A namespace identifier is used to create globally unique SNOMED CT identifiers for each component (i.e. concept, description and relationship) within a Member or Affiliate extension. This ensures that references to extension concepts contained in health record data are unambiguous and can be clearly attributed to a specific issuing organization.

A national or local extension uses a namespace identifier issued by SNOMED International to ensure that all extension components can be uniquely identified (across all extensions).

Therefore, a SNOMED CT Extension uses a single namespace identifier to identify all core components in the SNOMED CT Extension (see Namespace identifier).

Modules

Every SNOMED CT Extension includes one or more modules, and each module contains either SNOMED CT components or reference sets (or both). Modules may be dependent on other modules. A SNOMED CT Edition includes the contents of a focus module together with the contents of all the modules on which it depends. This includes the modules in the International Edition and possibly other modules from a national and/or local extension.

An edition is defined based on a single focus module. This focus module must be the most dependent module, in that the focus module is dependent on all the other modules in the edition.

Therefore, a SNOMED CT Extension uses one or more modules to categorize the components into meaningful groups (see Modules).

Language

SNOMED CT extensions can support a variety of use cases, including:

Translating SNOMED CT, for example

  • Adding terms used in a local language or dialect

  • Adding terms used by a specific user group, such as patient-friendly terms

Representing language, dialect or specialty-specific term preferences is possible using a SNOMED CT extension. The logical design of SNOMED CT enables a single clinical idea to be associated with a range of terms or phrases from various languages. In an extension, terms relevant for a particular country, specialty, hospital (or other organization) may be created, and different options for term preferences may be specified. Even within the same country, different regional dialects or specialty-specific languages may influence which synonyms are preferred. SNOMED CT supports this level of granularity for language preferences at the national or local level.

Therefore, an Extension can have its own language to support patient-friendly terms, local user groups, etc. (see Purpose).

Dependency

A SNOMED CT extension is a set of components and reference set members that add to the SNOMED CT International Edition. An extension is created, structured, maintained and distributed in accordance with SNOMED CT specifications and guidelines. Unlike the International Edition, an extension is not a standalone terminology. The content in an extension depends on the SNOMED CT International Edition, and must be used together with the International Edition and any other extension module on which it depends.

Therefore, a SNOMED CT Extension depends on the SNOMED CT International Edition directly or indirectly through another SNOMED CT Extension (see Extensions).

Versions

A specific version of an extension can be referred to using the date on which the extension was published.

There are many use cases that require a date specific version of an edition, including specifying the substrate of a SNOMED CT query, and specifying the version of SNOMED CT used to code a specific data element in a health record. A versioned edition includes the contents of the specified version of the focus module, plus the contents of all versioned modules on which the versioned focus module depends (as specified in the |Module dependency reference set|). The version of an edition is based on the date on which the edition was released. Many extension providers release their extensions as a versioned edition, using regular and predictable release cycles.

Therefore, a SNOMED CT Extension can be versioned and have a different release cycle than the SNOMED CT International Edition (see Versions).

Characteristics

To summarize, a SNOMED CT Extension has the following characteristics:

  • Uses the same RF2 structure as the SNOMED CT International Edition

  • Uses a single namespace identifier to globally identify its content

  • Uses one or more modules to categorize the content into groups

  • Uses one or more languages to support specific user groups and patient-friendly terms

  • Depends on the SNOMED CT International Edition

  • Uses versions (effective times) to identify its content across multiple releases

Now that we have a clear understanding of what SNOMED CT Extensions are, let's take a look at how we can use them in Snow Owl (see Extensions and Snow Owl).

File realm

You can manage and authenticate users with the built-in file realm. All the data about the users for the file realm is stored in the users file. The file is located in SO_PATH_CONF and is read on startup.

You need to explicitly select the file realm in the snowowl.yml configuration file in order to use it for authentication:

identity:
  providers:
    - file:
        name: users

In the above configuration the file realm is using the users file to read your users from. Each row in the file represents a username and password delimited by the : character. The passwords are BCrypt encrypted hashes. The default users file comes with a default snowowl user with the default snowowl password.
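
For illustration, a users file with a single account might look like this (the hash is a placeholder, not a working value):

snowowl:$2a$10$<bcrypt_hash>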

Users Command

To simplify file realm configuration, the Snow Owl CLI comes with a command to add a user to the file realm (snowowl users add). See the command help manual (-h option) for further details.

Authorization

The file security realm does NOT support the Authorization formats at the moment. If you are interested in configuring role-based access control for your users, it is recommended to switch to the LDAP security realm.

Extensions and Snow Owl

Snow Owl Concepts

Snow Owl uses the following basic concepts to provide authoring and maintenance support for SNOMED CT Extensions.

Code Systems

From the getting started page, we've learned what a Repository is and how Code Systems are defined as part of a Repository.

Reminder: A repository is a set of schemas and functionality to provide support for a dedicated set of Code Systems, eg. the SNOMED CT Repository stores all SNOMED CT related components under revision control and provides quick access to them. A Repository can contain one or more Code Systems and by default always comes with one predefined Code System, the root Code System (in the case of SNOMED CT, this often represents the International Edition).

SNOMED CT Extensions in Snow Owl are Code Systems with their own set of properties and characteristics. With Snow Owl's Code System API, a Code System can be created for each SNOMED CT Extension to easily identify the Code System and its components with a single unique identifier, called the Code System short name. The recommended naming approach when selecting the unique short name identifier is the following:

  • SNOMED CT International Edition: SNOMEDCT - often included in other editions for distribution purposes

  • National Release Center (single maintained extension) - SNOMEDCT-US - represents the SNOMED CT United States of America Extension

  • National Release Center (multiple maintained extensions) - SNOMEDCT-UK-CL, SNOMEDCT-UK-DR - United Kingdom Clinical and Drug Extensions, respectively

  • Care Provider with a special extension based on a national extension - SNOMEDCT-US-UNMC - University of Nebraska Medical Center's extension builds on top of the SNOMEDCT-US extension

The primary namespace identifier and set of modules and languages can be set during the creation of the Code System, and can be updated later on if required. These properties can be used when users are accessing the terminology server for authoring purposes to provide a seamless authoring experience for the user without them needing to worry about selecting the proper namespace, modules, language tags, etc. (NOTE: this feature is not available yet in the OSS version of Snow Owl)

Extension Of

A Snow Owl Code System can be marked as an extensionOf another Code System, which ties them together, forming a dependency between the two Code Systems. A Code System can have multiple Extension Code Systems, but a Code System can only be extensionOf a single Code System.

Branching

In Snow Owl, a Repository maintains a set of branches, and Code Systems are always attached to a dedicated branch. For example, the default root Code Systems are always tied to the default branch, called MAIN. When creating a new Code System, the "working" branchPath can be specified; doing so assigns the branch to the Code System. A Code System cannot be attached to multiple branches at the same time, and a branch can only be assigned to a single Code System in a Repository.

Snow Owl's branching infrastructure allows the use of isolated environments for both distribution and authoring workflows, so branches play a crucial role in SNOMED CT Extension management as well. They also provide support for a seamless upgrade mechanism, which can be used whenever there is a new version available in one of your SNOMED CT Extension's dependent Code Systems.

Versions

As in real life, a Code System can have zero or more versions (also known as releases). A version is a special branch that is created during the versioning process and makes the latest available content accessible later in its then-current form. Since SNOMED CT Extensions can have releases as well, creating a Code System Version in Snow Owl is a must to produce the release packages.

Examples

The following image shows the repository content rendered from the available commits, after a successful International Edition import.

Dots represent commits made with the commit message on the right. Green boxes represent where the associated branch's HEAD is currently located. Blue tag labels represent versions created during the commit.

If your use case would be to import the SNOMED CT US Extension 2019-09-01 version into this repository, then ideally it would look like this:

The next section describes the use case scenarios in the world of SNOMED CT and the recommended approaches for deploying these scenarios in Snow Owl.

Scenarios

This section describes the use case scenarios present in the world of SNOMED CT and how Snow Owl can be used in those scenarios to maximize its full potential. Each scenario comes with a summary and a pros/cons section to help your decision making process when selecting the appropriate scenario for your use case:

  • Single Edition

  • Single Extension

  • Multi Extension

Single Extension Authoring

A typical extension scenario is the development of the extension itself. Whether you are starting your extension from scratch or already have a well-developed version that you need to maintain, the first choice you need to make is to identify the dependencies of your SNOMED CT Extension.

Extending the International Edition

If your Extension extends the SNOMED CT International Edition directly, then you need to pick one of the available International Edition versions:

  • If you are starting from scratch, it is always recommended to select the latest International Release as the starting point of your Extension.

  • If you have an existing Extension then you probably already know the International Release version your Extension depends on.

When you have identified the version you need to depend on, you need to import that version (or a later release package that also includes that version in its FULL RF2 package) first into Snow Owl. Make sure that the createVersion feature of the RF2 import process is enabled, so it will automatically create the versions for each imported RF2 effectiveTime value.

RF2 releases tend to have content issues with the International Edition itself or refer to missing content when you try to import them into Snow Owl via the RF2 Import API. For this reason, the recommended way is to always use the most recent Snapshot RF2 release of a SNOMED CT Extension to form its first representation in Snow Owl. That has a high probability of success without any missing component dependency errors during import. If you are having trouble importing an RF2 Release Package into Snow Owl, feel free to raise a question on our GitHub Issues page.

After you have successfully imported all dependencies into Snow Owl, the next step is to create a Code System that represents your SNOMED CT Extension (see the Core API). When creating the Code System, besides specifying the namespace and optional modules and languages, you need to enter a Code System shortName, which will serve as the unique identifier of your Extension, and select the extensionOf value, which represents the dependency of the Code System.

After you have successfully created the Code System representing your Extension, you can import any existing content from a most recent release or start from scratch by creating the module concept of your extension.

Extending another Extension

If your Extension needs to extend another Extension and not the International Edition itself, then you need to identify the version you'd like to depend on in that Extension (that indirectly will select the International Edition dependency as well). When you have identified all required versions, then starting from the International Edition recursively traverse back and repeat the RF2 Import and Code System creation steps described in the previous section until you have finally imported your extension. In the end your extension might look like this, depending on how many Extensions you are depending on.

Summary

Setting up a Snow Owl deployment like this is not an easy task. It requires a thorough understanding of each SNOMED CT Extension you'd like to import and their dependencies as well. However, after the initial setup, the maintenance of your Extension becomes straightforward, thanks to the clear distinction from the International Edition and from its other dependencies. The release process is easier, and you can choose to publish your Extension as an extension only release, as an Edition, or both (see Release). Additionally, when a new version is available in one of the dependencies, you will be able to upgrade your Extension with the help of automated validation rules and upgrade processes (see Upgrade). From the distribution perspective, this scenario shines when you need to maintain multiple Extensions/Editions in a single deployment.

Pros:

  • Excellent for authoring and maintenance

  • Good for distribution

Cons:

  • Harder to set up the initial deployment

Snow Owl is a multi-purpose terminology server with a main focus on the SNOMED CT International Edition and its Extensions. Whether you are a producer of a SNOMED CT Extension or a consumer of one, Snow Owl has you covered. As always, feel free to ask your questions regarding any of the content you read here (raise a ticket on GitHub Issues).

Single Edition

The most common use case to consume a SNOMED CT Release Package is to import it directly into a Terminology Server (like Snow Owl) and make it available as read-only content for both human and machine access (via REST and FHIR APIs).

SNOMED CT International Edition

SNOMED CT Extension Edition

Summary

The single edition scenario provides access to any SNOMED CT Edition, without much effort, directly on the pre-initialized SNOMEDCT Code System. It is easy to set up and maintain. Because of its flat structure, it is good for distribution and extension consumers. Although it can be used for authoring in certain scenarios, due to the missing distinction between the International Edition and the Extension it is not the best choice for extension authoring and maintenance.

This scenario can be further extended to support multiple simultaneous Edition releases living on their own dedicated SNOMED CT Code Systems. The Root SNOMEDCT Code System in this case is empty and only serves the purpose of creating other Code Systems "underneath" it. Each SNOMED CT Code System is then imported into its own dedicated branch forming a star-like branch structure at the end (zero-length MAIN branch and content branches). This is useful in distribution scenarios, where multiple Extension Code Systems need to be maintained with their own dedicated set of dependencies and there is no time to set up the proper Extension Scenario (see next section). The only drawback of this setup is the potentially high usage of disk space due to the overlap between the various Editions imported into their own Code Systems (since each of them contains the entire International Release).

Pros:

  • Good for maintaining the SNOMED CT International Edition

  • Good for distribution

  • Simple to set up and maintain

Cons:

  • Not recommended for extension authoring and maintenance

  • Not recommended for multi-extension distribution scenarios

Upgrading

Maintenance of a SNOMED CT Extension is essential to ensure that

  • it incorporates changes requested by terminology consumers

  • it remains aligned with the SNOMED CT International Edition

While both of these maintenance related tasks are potentially assigned to one of the upcoming Extension development cycles, there is a clear distinction between the two maintenance tasks.

Change requests

Changes requested by your terminology consumers are typically content authoring tasks that you would assign to an Extension authoring team. They usually come with a well-described problem you need to address in the terminology as you would do in the usual development cycle.

International Edition Changes

Aligning content to the SNOMED CT International Edition is one of the main responsibilities of an Extension maintainer. However, keeping up with the changes introduced in SNOMED CT International Edition biannually (on January 31st and July 31st) can be an overwhelming task, especially if:

  • you are under pressure from your terminology consumers to make the requested changes ASAP, especially in mission critical scenarios.

  • the changes introduced in the International Edition are conflicting with your local changes and/or causing maintenance related issues after the upgrade.

To address SNOMED CT International Edition upgrade tasks in a reliable and reproducible way, Snow Owl offers an upgrade flow for SNOMED CT Extensions.

Upgrades

A Code System upgrade in Snow Owl is a complex workflow with states and steps. The workflow involves a special Upgrade Code System, a series of automated migration processes and validation rules to ensure the quality and reliability of the operation. The upgrade can be completed quickly if there were no conflicts between the Extension and the International Edition. However, upgrades can also be a long-running process spanning many months when significant structural changes (e.g. in substances, anatomy, or modeling approach) are made in the International Edition.

Starting the Upgrade

In Snow Owl, SNOMED CT Extensions are linked to their SNOMED CT dependency with the extensionOf property. This property describes the International Edition and the version that the Extension depends on. For example, the SNOMEDCT/2019-07-31 value specifies that our Extension depends on the 2019-07-31 version of the International Edition.

Extension upgrades can be started when there is a new version available in the Extension/Edition we have selected as our dependency in the extensionOf property. When fetching a SNOMED CT Code System via the Code System API, Snow Owl will check whether there are any upgrades available and return them in the availableUpdates array property. If there are no upgrades available, the array will be empty.
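
For example, a minimal sketch of the relevant fields when fetching an Extension Code System (illustrative only; the actual response contains many more properties and the exact value format may differ):

GET /codesystems/SNOMEDCT-MYEXT

{
  "id": "SNOMEDCT-MYEXT",
  "extensionOf": "SNOMEDCT/2019-07-31",
  "availableUpdates": ["SNOMEDCT/2020-01-31"],
  ...
}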

When the upgrade is started, Snow Owl creates a special <codeSystemId>-UP-<newExtensionOf> (eg. SNOMEDCT-MYEXT-UP-SNOMEDCT-2020-01-31) Code System to allow authors and the automated processes to migrate the latest development version of the Extension to the new dependency.

Regular Maintenance

Regular daily Extension development tasks still need to be completed and pushed somewhere so that development can continue even while an upgrade is in progress. For this reason, each Extension keeps an active development version that can be used to push daily maintenance changes and business-as-usual tasks.

Changes pushed to the development area will regularly need to be synced with the upgrade until the upgrade completes, so the upgrade team will be able to resolve all remaining conflicts and issues.

Upgrade Checks

Upgrade Checks ensure the quality of the upgrade process and execute certain tasks/checks automatically. An Upgrade Check can be any logic or function to be run during the upgrade. Upgrade Checks can access the underlying upgrade Code System's content and report issues (validation rules) or fix content automatically (migration rules). For example, a validation rule (like Active relationships must have active source, type, destination references) can be executed after each change pushed to the upgrade branch to verify whether any potentially invalid relationships are left to fix before the upgrade can be completed.

Completing the Upgrade

Once the upgrade authoring team is done with the necessary changes to align the Extension with the new International Edition and all the checks have completed successfully, the upgrade can be completed. Completing the upgrade performs the following steps:

  • Creates a <codeSystemId>-DO-<previousExtensionOf> Code System to refer to the previous state of the Extension

  • Changes the current working branch of the Extension Code System to the branch that was used during the upgrade process

  • Deletes the <codeSystemId>-UP-<newExtensionOf> Code System, which marks the upgrade complete, and the upgrade itself cannot be accessed anymore.

Multi Extension Authoring

Multi Extension Authoring and Distribution

On top of single Edition/Extension distribution and authoring, Snow Owl provides full support for multi-SNOMED CT distribution and authoring even if the Extensions depend on different versions of the SNOMED CT International Edition.

Next steps

After you have initialized your Snow Owl instance with the Extensions you'd like to maintain, the next steps are:

  • Development

  • Release

  • Upgrade

Since Snow Owl by default comes with a pre-initialized SNOMED CT Code System called SNOMEDCT, it takes just a single call to import the official RF2 package using the SNOMED CT RF2 Import API. The import by default creates a Code System Version for each SNOMED CT Effective Date available in the supplied RF2 package. After a successful import the content is immediately available via REST and FHIR APIs.
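
For example (an illustrative sketch only; the exact endpoint and parameters are documented in the SNOMED CT RF2 Import API reference, and the type parameter and archive name below are assumptions):

POST /snomedct/SNOMEDCT/import?type=FULL -F "file=@SnomedCT_InternationalRF2_Production.zip"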

National Release Centers and other Care Providers provide their own SNOMED CT Edition distribution for third-party consumers in RF2 format. Importing their Edition distribution instead of the International Edition directly into the pre-initialized SNOMEDCT Code System with the same SNOMED CT RF2 Import API makes both the International Edition (always included in Edition packages) and the National Extension available for read-only access.

See additional Extension maintenance-related material in the official Extensions Practical Guide.

See the Extension Development section on how you can address change requests and incorporate them as regular tasks into the main version of your Extension.

To start an Extension upgrade to a newer International Edition (or to a newer Extension dependency version), you can use the Upgrade API. The only thing that needs to be specified there is the desired new version of the Extension's extensionOf dependency.
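
As a hypothetical sketch (the endpoint path and property names below are assumptions based on the description above; consult the Upgrade API reference for the exact contract):

POST /upgrade
{
  "resource": "codesystems/SNOMEDCT-MYEXT",
  "extensionOf": "SNOMEDCT/2020-01-31"
}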

To achieve a deployment like this you need to perform the same initialization steps for each desired SNOMED CT Extension as if it were a single extension scenario (see the single edition scenario above). Development and maintenance of each managed extension can happen in parallel without affecting one another. Each of them can have their own release cycles, maintenance and upgrade schedules, and so on.

Socialstyrelsen Standards

(Pro feature)

This page describes the terminology standards offered publicly by the Swedish National Board of Health and Welfare (Socialstyrelsen): https://www.socialstyrelsen.se/statistik-och-data/klassifikationer-och-koder/

Releases

When an Extension reaches the end of its current development cycle, it needs to be prepared for release and distribution.

Workflows and Authoring Branches

All planned content changes that are still on their dedicated branch either need to be integrated with the main development version or removed from the scope of the next release.

Prepare the Release

After all development branches have been merged and integrated with the main work-in-progress version, the Extension needs to be prepared for release. This usually involves last-minute fixes, running quality checks and validation rules, and generating the necessary normal form of the Extension.

Release

When all necessary steps have been performed successfully, a new Code System Version needs to be created in Snow Owl to represent the latest release. The versioning process will assign the requested effectiveTime to all unpublished components, update the necessary Metadata reference sets (like the Module Dependency Reference Set) and finally create a version branch to reference this release later.
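
Versioning uses the same /versions endpoint shown elsewhere in this guide; the Extension identifier and dates below are illustrative:

POST /versions
{
  "resource": "codesystems/SNOMEDCT-MYEXT",
  "version": "2024-06-30",
  "description": "2024-06-30 release of the Extension",
  "effectiveTime": "2024-06-30"
}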

Packaging

After a successful release, an RF2 Release Package needs to be generated for downstream consumers of your Extension. Snow Owl can generate this final RF2 Release Package for the newly released version via the RF2 Export API.
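
As a sketch (the exact RF2 Export API endpoint and parameters may differ; the path and the type parameter below are assumptions):

POST /snomedct/SNOMEDCT-MYEXT/export?type=SNAPSHOT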

LOINC

(Pro feature)

As with every other resource in Snow Owl, a LOINC Code System needs to be created using the CodeSystems API first:

POST /codesystems
{
  "id": "LOINC",
  "url": "http://hl7.org/fhir/sid/loinc",
  "title": "LOINC",
  "language": "en",
  "description": "LOINC is a freely available international standard for tests, measurements, and observations",
  "status": "active",
  "copyright": "This material contains content from LOINC (http://loinc.org). LOINC is copyright ©1995-2023, Regenstrief Institute, Inc. and the Logical Observation Identifiers Names and Codes (LOINC) Committee and is available at no cost under the license at http://loinc.org/license. LOINC® is a registered United States trademark of Regenstrief Institute, Inc.",
  "owner": "ownerUserId",
  "contact": "https://loinc.org/",
  "oid": "2.16.840.1.113883.6.1",
  "toolingId": "loinc",
  "settings": {
      "publisher": "Regenstrief Institute, Inc.",
  }
}

Then, an official release file can be imported via the following request:

POST /loinc/LOINC/import -F "file=@Loinc_2.77.zip"

And last, create a new version to tag the content:

POST /versions
{
  "resource": "codesystems/LOINC",
  "version": "2.77",
  "description": "LOINC 2.77 2024-02-27 release",
  "effectiveTime": "2024-02-27"
}
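
Once versioned, the release can be retrieved like any other versioned resource, for example via the FHIR API (an illustrative request following the versioned-identifier convention described in the FHIR API section):

GET /snowowl/fhir/CodeSystem/LOINC/2.77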

Development

Authoring is the process by which content is created in an extension in accordance with a set of authoring principles. These principles ensure the quality of content and referential integrity between content in the extension and content in the International Edition (the principles are set by SNOMED International and can be found here).

During the extension development process authors are:

  • creating, modifying or inactivating content according to editorial principles and policies

  • running validation processes to verify the quality and integrity of their Extension

  • classifying their authored content with an OWL Reasoner to produce its distribution normal form

The authors directly (via the available REST and FHIR APIs) or indirectly (via user interfaces, scripts, etc.) work with the Snow Owl Terminology Server to make the necessary changes for the next planned Extension release.

Workflow and Editing

Authors often require a dedicated editing environment where they can make the necessary changes and let others review the changes they have made, so errors and issues can be corrected before integrating the change with the rest of the Extension. Similarly to how SNOMED CT Extensions are separated from the SNOMED CT International Edition and other dependencies, this can be achieved by using branches. The following endpoints support this workflow:

  • Branching API - to create and merge branches

  • Compare API - to compare branches

Authoring APIs

To let authors make the necessary changes they need, Snow Owl offers the following SNOMED CT component endpoints to work with:

  • Concept API - to create and edit SNOMED CT Concepts

  • Description API - to create and edit SNOMED CT Descriptions

  • Relationship API - to create and edit SNOMED CT Relationships

  • Reference Set API - to create and edit SNOMED CT Reference Sets

  • Reference Set Member API - to create and edit SNOMED CT Reference Set Members
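
As a purely illustrative sketch of what such a request might look like (the endpoint path and property names below are assumptions, not the documented contract; see the Description API reference for the exact shape):

POST /snomedct/SNOMEDCT-MYEXT/descriptions
{
  "conceptId": "404684003",
  "term": "My local synonym",
  "typeId": "900000000000013009",
  "languageCode": "en",
  "moduleId": "<extension-module-id>",
  "commitComment": "Added a synonym to 404684003|Clinical finding|"
}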

Validation

To verify the quality and integrity of the changes they have made, authors often generate reports and make further fixes according to the findings. In Snow Owl, reports and rules can be represented with validation queries and scripts.

  • Validation API - to run validation rules and fetch their reported issues on a per branch basis

Classification

Last but not least, authors run an OWL Reasoner to classify their changes and generate the necessary normal form of their Extension. The Classification API provides support for running these reasoner instances and generating the necessary normal form.

KVÅ (KKÅ/KMÅ)

(Pro feature)

KKÅ

First, create the KKÅ Code System:

POST /codesystems
{
  "id": "kva-kirurgiska",
  "url": "http://klassifikationer.socialstyrelsen.se/kva-kirurgiska",
  "title": "KVÅ – kirurgiska åtgärder (KKÅ)",
  "language": "se",
  "description": "# Klassifikation av kirurgiska åtgärder",
  "status": "active",
  "owner": "ownerUserId",
  "copyright": "",
  "contact": "klassif@socialstyrelsen.se - Avdelningen för register och statistik, Enheten för klassifikationer och terminologi",
  "oid": "1.2.752.116.1.3.2.3.6",
  "toolingId": "lcs",
  "settings": {
    "publisher": "Socialstyrelsen",
    "isPublic": true
  }
}

Then, import the concepts from the official TSV file:

POST /lcs/kva-kirurgiska/import?idColumn=Kod&ptColumn=Titel&synonymColumns=Förkortning(ar)&synonymColumns=Beskrivning&parentColumn=Överordnad%20kod&locale=se -F "file=@kva-kirurgiska-atgarder-kka-2024-01-01.tsv"

And last, create a new version to mark the content:

POST /versions
{
  "resource": "codesystems/kva-kirurgiska",
  "version": "2024-01-01",
  "description": "2024-01-01 release",
  "effectiveTime": "2024-01-01"
}

KMÅ

First, create the KMÅ Code System:

POST /codesystems
{
  "id": "kva-medicinska",
  "url": "http://klassifikationer.socialstyrelsen.se/kva-medicinska",
  "title": "KVÅ – medicinska åtgärder (KMÅ)",
  "language": "se",
  "description": "# Klassifikation av medicinska åtgärder",
  "status": "active",
  "owner": "ownerUserId",
  "copyright": "",
  "contact": "klassif@socialstyrelsen.se - Avdelningen för register och statistik, Enheten för klassifikationer och terminologi",
  "oid": "1.2.752.116.1.3.2.3.5",
  "toolingId": "lcs",
  "settings": {
    "publisher": "Socialstyrelsen",
    "isPublic": true
  }
}

Then, import the concepts from the official TSV file:

POST /lcs/kva-medicinska/import?idColumn=Kod&ptColumn=Titel&synonymColumns=Beskrivning&parentColumn=Överordnad%20kod&locale=se -F "file=@kva-medicinska-atgarder-kma-2024-01-01.tsv"

And last, create a new version to mark the content:

POST /versions
{
  "resource": "codesystems/kva-medicinska",
  "version": "2024-01-01",
  "description": "2024-01-01 release",
  "effectiveTime": "2024-01-01"
}

ICF

(Pro feature)

First, create the ICF Code System:

POST /codesystems
{
  "id": "icf",
  "url": "http://klassifikationer.socialstyrelsen.se/icf",
  "title": "ICF",
  "language": "se",
  "description": "# Internationell klassifikation av funktionstillstånd, funktionshinder och hälsa",
  "status": "active",
  "contact": "klassif@socialstyrelsen.se - Avdelningen för register och statistik, Enheten för klassifikationer och terminologi",
  "owner": "ownerUserId",
  "oid": "1.2.752.116.1.1.3",
  "toolingId": "lcs",
  "settings": {
      "publisher": "Socialstyrelsen",
      "isPublic": true
  }
}

Then, using column mapping import the concepts from the official TSV file:

POST /lcs/icf/import?idColumn=Kod&ptColumn=Titel&synonymColumns=Beskrivning&synonymColumns=Alternativ%20titel&parentColumn=Överordnad%20kod&locale=se -F "file=@icf-2024-01-01.tsv"

And last, create a new version to mark the content:

POST /versions
{
  "resource": "codesystems/icf",
  "version": "2024-01-01",
  "description": "2024-01-01 release",
  "effectiveTime": "2024-01-01"
}

ICD-10-SE

(Pro feature)

First, create the ICD-10-SE Code System:

POST /codesystems
{
  "id": "ICD10SE",
  "url": "http://hl7.org/fhir/sid/icd-10-se",
  "title": "ICD-10-SE",
  "language": "se",
  "description": "# Internationell statistisk klassifikation av sjukdomar och relaterade hälsoproblem (ICD-10-SE)",
  "status": "active",
  "owner": "ownerUserId",
  "copyright": "",
  "contact": "https://www.socialstyrelsen.se/statistik-och-data/klassifikationer-och-koder/kodtextfiler/",
  "oid": "1.2.752.116.1.1.1",
  "toolingId": "icd10",
  "settings": {
    "publisher": "Socialstyrelsen",
    "isPublic": true
  }
}

Then, import the classification content from a ClaML file (generated by B2i Healthcare, request it here):

POST /icd10/ICD10SE/classes/import -F "file=@ICD-10-SE_2024_generated.xml"

And last, create a new version to mark the content:

POST /versions
{
  "resource": "codesystems/ICD10SE",
  "version": "2024-01-01",
  "description": "2024-01-01 release",
  "effectiveTime": "2024-01-01"
}

REST APIs

Alternatives

To simplify integration and enable interoperability with third-party systems, Snow Owl TS offers two forms of accessing terminology resources via HTTP requests:

  • An API compliant with the requirements listed in the FHIR R5 specification for terminology services (https://hl7.org/fhir/R5/terminology-service.html)

  • A native API that is customized to match Snow Owl's internal representation of its supported resources, and so can provide more options

The following pages provide additional information on each method of access: FHIR API and Native API.

Official Examples

A comprehensive set of examples is available in our Postman collection: https://documenter.getpostman.com/view/16295366/2s93z3h6cP

Interactive documentation

Port 8080 and the context path /snowowl are assigned in the default installation for serving both APIs. Navigate to http(s)://<host>:8080/snowowl/ to visit the built-in "interactive playground" which lists all available requests by category in the dropdown on the top left.

Once valid user credentials are entered on the "Authentication" page reachable from the sidebar, it becomes possible to send requests to the server and inspect the returned response body as well as any relevant headers.

Select a request from the sidebar so that its documentation page appears in the main area. Each request is accompanied by a short description and the list of parameters it accepts. Fields marked with a * symbol are required.

Press the Try button after populating the input fields to execute the request.

FHIR API

Introduction

HL7's Fast Healthcare Interoperability Resources (FHIR) standard describes data types, resources, interactions, coded values and their associated code systems that are used to represent and exchange structured clinical data.

Thanks to its pluggable and extensible architecture, Snow Owl TS is able to expose clinically relevant resources like code systems, value sets and concept maps in a format that can be consumed by third-party FHIR clients. Additionally, Snow Owl's revision model allows concurrent management of resource versions.

Interactive documentation

Navigate to http://<host>:8080/snowowl/ and select "FHIR API" from the category dropdown to see the full list of FHIR requests supported by Snow Owl TS. We also provide a Postman collection with pre-populated example requests to try: https://documenter.getpostman.com/view/16295366/2s93z3h6cP

Request/response formats

JSON and XML formats are both supported; resources in Turtle RDF format are not accepted (nor produced) by the server. The MIME types for these formats can appear in the Accept and Content-Type headers and are the following:

  • XML

    • application/fhir+xml

    • application/xml

    • text/xml

  • JSON

    • application/fhir+json

    • application/json

    • text/json

Unless explicitly stated in the MIME type, the server accepts and responds in the R5 format. To select an explicit FHIR version format the fhirVersion argument can be used along with the fhir+json/xml media types. For example:

Accept: application/fhir+json;fhirVersion=4.0.1

All declared and provided FHIR endpoints support the following FHIR Specification versions:

  • R4 - use fhirVersion=4.0.1

  • R4B - use fhirVersion=4.3.0

  • R5 - use fhirVersion=5.0.0

Common request parameters

Override response format

Clients can override the desired output format by using the _format query parameter, if they have limited access to request headers. In this case shorthand values like xml and json are also permitted (Content-Type must still be set correctly if the request includes a body):

GET /snowowl/fhir/CodeSystem/SNOMEDCT?_format=xml&_pretty=true

[Response headers]
Content-Type: application/fhir+xml

<CodeSystem xmlns="http://hl7.org/fhir">
  <id value="SNOMEDCT"/>
  <meta>
    <lastUpdated value="2023-10-17T15:03:40.942Z"/>
  </meta>
  <language value="en"/>
  <text>
    <status value="empty"/>
    <div xmlns="http://www.w3.org/1999/xhtml"></div>
  </text>
  <url value="http://snomed.info/sct/900000000000207008"/>
  <name value="SNOMEDCT"/>
  <title value="SNOMED CT International Edition"/>
  <status value="active"/>
  [...]

Snow Owl returns a 406 Not Acceptable response if the client requested a response format it does not support. Conversely, if the request body is in a format it does not recognize, a 415 Unsupported Media Type response is emitted.

Pretty-printing

For development purposes, responses returned from the server can be formatted so they are more pleasing to the human eye. The example above already includes the query parameter that controls this behavior; it is named _pretty. Setting its value to true results in pretty-printed output.

Resource summary

The query parameter _summary controls whether a subset of the elements should be returned for a resource. Supported values are:

  • true -> return a pre-defined subset of elements from the resource (these are marked as "summary" in the FHIR specification)

  • false -> return all elements of the resource

  • text -> return text, id, meta and top-level elements marked as "mandatory" in the FHIR specification (to ensure that the response remains a valid FHIR resource representation)

  • data -> remove the text element that contains a human-readable rendering of the resource, in the form of eg. an XHTML snippet

  • count -> return hit count without the accompanying list of matching resources (applicable in resource search interactions only)

When summary mode is enabled, returned resources are marked with a SUBSETTED code to indicate that certain elements were left out:

GET /snowowl/fhir/CodeSystem/SNOMEDCT?_summary=text

[Response headers]
Content-Type: application/fhir+json

{
  "resourceType": "CodeSystem",
  "id": "SNOMEDCT",
  "meta": {
    "lastUpdated": "2023-10-17T15:03:40.942Z",
    "tag": [{
      "system": "http://terminology.hl7.org/CodeSystem/v3-ObservationValue",
      "code": "SUBSETTED",
      "display": "As requested, resource is not fully detailed."
    }]
  },
  "text": {
    "status": "empty",
    "div": "<div xmlns=\"http://www.w3.org/1999/xhtml\"></div>"
  },
  "status": "active",
  "content": "not-present"
}

Element selection

If none of the _summary modes listed above are appropriate for a use case, clients can select individual elements for inclusion via the _elements query parameter. Its value should be a comma-separated list of element names. Elements marked as "mandatory" are always returned.

As above, the returned resource's meta.tag element will also include a SUBSETTED Coding to indicate that some information has been left out.
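
For example, requesting only the url and title elements of a code system (an illustrative request with an abbreviated response):

GET /snowowl/fhir/CodeSystem/SNOMEDCT?_elements=url,title

{
  "resourceType": "CodeSystem",
  "id": "SNOMEDCT",
  "meta": {
    "tag": [{
      "system": "http://terminology.hl7.org/CodeSystem/v3-ObservationValue",
      "code": "SUBSETTED",
      "display": "As requested, resource is not fully detailed."
    }]
  },
  "url": "http://snomed.info/sct/900000000000207008",
  "title": "SNOMED CT International Edition",
  ...
}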

Sorting and paging

Clients can indicate the preferred number of results to return on a single page via the _count query parameter. Paging via offsets is not supported, but the response usually includes a link of type "next" to retrieve the next page:

GET /snowowl/fhir/CodeSystem?_count=5

[Response headers]
Content-Type: application/fhir+json

{
  "resourceType": "Bundle",
  "id": "codesystems",
  "meta": {
    "lastUpdated": "2023-11-28T18:37:52.057338Z"
  },
  "type": "searchset",
  "link": [{ 
    "relation": "next",
    "url": "https://<host>/snowowl/fhir/CodeSystem?_count=5&_after=AoIhMDg2dlE2dHd4YkRDNnZHWjUxNGYzWlplR2M="
  }],
  "total": 1676,
  "entry": [
    {
      "fullUrl": "https://<host>/snowowl/fhir/CodeSystem/resource_1",
      "resource": {
        "resourceType": "CodeSystem",
        "id": "resource_1",
        "meta": {
          "lastUpdated": "2023-10-19T13:21:52.216Z"
        },
        "name": "resource_1",
        "title": "First resource",
        ...
      }
    },
    ...
  ]
}

As can be seen from the example, the paging mechanism uses an additional state tracking parameter called _after.

Resource types

Snow Owl only supports a small subset of FHIR's 150+ resource types – the ones that are relevant from a terminology service perspective. These are described on separate pages in detail: CodeSystem, ValueSet and ConceptMap.

Resource identifiers

The id element of each resource is assigned by Snow Owl in create interactions; the assigned value never changes once it has been set and is unique across resource types. Update interactions that use an identifier which did not exist previously will create a new resource – in this case the identifier is checked for potential collisions first.

Snow Owl represents versioned content as standalone resources when accessed via the FHIR API (the version part is separated from the resource identifiers by a / character which is not allowed to be used in "regular" FHIR identifiers):

GET /snowowl/fhir/CodeSystem/SNOMEDCT/2021-01-31

[Response headers]
Content-Type: application/fhir+json

{
  "resourceType": "CodeSystem",
  "id": "SNOMEDCT/2021-01-31",
  "meta": {
    "lastUpdated": "2023-10-17T14:59:31.529Z"
  },
  "url": "http://snomed.info/sct/900000000000207008/version/20210131",
  "version": "2021-01-31",
  "name": "SNOMEDCT/2021-01-31",
  "title": "SNOMED CT International Edition",
  "status": "active",
  "date": "2021-01-31T00:00:00Z",
  "publisher": "SNOMED International",
  "content": "not-present",
  "count": 481509,
  ...
}

If there are no versions present for a given resource, or the requested identifier does not include a version part, the latest development version is returned which may include "in-progress" changes. Therefore it is recommended to always query a specific version of any terminology content to get consistent results, especially when the same terminology server instance is being used for both authoring and distribution.

Response status

The following HTTP status codes are used by Snow Owl's FHIR API to indicate the success or failure of an interaction:

HTTP Status
Reason

200

OK

400

Bad Request

401

Unauthorized

403

Forbidden

404

Not Found

500

Internal Error

If an error occurs, a response containing an OperationOutcome resource may be returned to include additional details about the problem at hand:

GET /snowowl/fhir/CodeSystem/abc

[Response headers]
Content-Type: application/fhir+xml

<OperationOutcome xmlns="http://hl7.org/fhir">
  <issue>
    <severity>error</severity>
    <code>not_found</code>
    <diagnostics>
      Code System with identifier 'abc' could not be found.
    </diagnostics>
    <details>
      <text>Resource Id 'abc' does not exist</text>
      <coding>
        <code>msg_no_exist</code>
        <system>http://hl7.org/fhir/operation-outcome</system>
        <display>Resource Id 'abc' does not exist</display>
      </coding>
    </details>
    <location>abc</location>
  </issue>
</OperationOutcome>

Content syndication

With content syndication, data can be seamlessly moved between different Snow Owl Terminology Server deployments.

This functionality is useful when content created in a central deployment (upstream) needs to be distributed to one or more read-only downstream instances. The resource distribution is designed to be uni-directional and semi-automated where an actor has to configure any new downstream instances to be able to receive data from the central unit.

Configure upstream

To be able to access the upstream server and its content the following items are required:

  • the HTTP port of Elasticsearch has to be accessible for the downstream Snow Owl and Elasticsearch instances (configured via the http.port property, the default is 9200)

  • the REST API of Snow Owl has to be accessible for the downstream Snow Owl servers

  • an Elasticsearch API key with sufficient privileges for authentication and authorization

  • a Snow Owl API key with sufficient privileges for authentication and authorization

  • configure selected terminology resources as distributable

Access Elasticsearch

In case Snow Owl uses a self-hosted Elasticsearch instance the HTTP port can be opened by modifying the container settings in the docker-compose.yml file. Make sure to remove the localhost IP prefix from the port declaration:

docker-compose.yml
...
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:${ELASTICSEARCH_VERSION}
    container_name: elasticsearch
...
    ports:
-      - "127.0.0.1:9200:9200"
+      - "9200:9200"

When opening up a self-hosted Elasticsearch make sure to use strengthened security with secure HTTP and username/password access. A detailed guide on Elasticsearch security can be found in the official Elasticsearch documentation.

In the case of a hosted Elasticsearch instance there is nothing to do; it will already be accessible from outside.

Access Snow Owl

The default reverse proxy configuration (shipped in the released package) exposes the Snow Owl REST API via the URL: http(s)://upstream-snow-owl-url/snowowl

Other than that no additional configuration is needed.

Obtain an Elasticsearch API key

Creating a new API key for Elasticsearch is possible either through its Api Key API or - in the case of a hosted instance - from within Kibana.

The content syndication operation requires the following permissions:

  • cluster privilege: monitor

  • index privilege: read

Here is an example request body for the Api Key API:

POST /_security/api_key
{
  "name": "syndication-api-key",
  "expiration": "30d",
  "role_descriptors": { 
    "syndicate-role": {
      "cluster": [
        "monitor"
      ],
      "indices": [
        {
          "names": [
            "*"
          ],
          "privileges": [
            "read"
          ]
        }
      ]
    }
  }
}

This request will return the following response:

{
  "id" : "<token_id>",
  "name" : "syndication-api-key",
  "expiration" : 0,
  "api_key" : "<api_key>",
  "encoded" : "<encoded_api_key>"
}

Take note of the encoded API key, which is the one that will be used later on. To obtain an API key using Kibana, follow the official guide with the same settings as above.

Obtain a Snow Owl API Key

To request an API key from the upstream Snow Owl Terminology Server the following REST API endpoint must be used:

To request an API key

POST https://upstream-snow-owl-url/snowowl/token

Request Body

Name
Type
Description

username*

String

The username to authenticate with

password*

String

The password belonging to the username

token

String

Previous token to renew

expiration

String

Expiration interval, e.g. 1d or 2h

permissions

List<String>

List of permissions

{
    "token": "<snow-owl-api-key>"
}

Select distributable resources

All three major terminology resource types can be configured as distributable. Resources have a settings map that can be updated via their specific REST API endpoints:

  • PUT /codesystems/{codeSystemId}

  • PUT /valuesets/{valueSetId}

  • PUT /conceptmaps/{conceptMapId}

A setting called distributable has to be set with a value of either true or false. Here is an example update request to make the 'Example Code System' distributable:

PUT /codesystems/example_codesystem_id
{
  "settings": {
    "distributable": true
  }
}

Configure downstream

Elasticsearch

There is one configuration property that must be set before provisioning a new downstream Snow Owl Terminology Server.

Any potential upstream Elasticsearch instance must be listed as an allowed source of information for the downstream Elasticsearch instances via a configuration parameter in the elasticsearch.yml file.

The property is called reindex.remote.whitelist:

elasticsearch.yml
...
http.port: 9200
...
reindex.remote.whitelist: ["upstream-elasticsearch-url.com:9200", "other-upstream-elasticsearch-url.com:9200"]

The whitelisted URL must contain the upstream HTTP port and must not contain the scheme.

Provision a new downstream server

Provisioning a new downstream server has the following prerequisites:

  • start with an empty dataset

  • collect all terminology resource identifiers that need to be syndicated

  • get all the necessary credentials to communicate with upstream

  • initiate the resource syndication and verify the result

Collect terminology resources for syndication

To populate a downstream server with terminology resources via an upstream source, one must collect the required resource identifiers or resource version identifiers beforehand.

Resource identifiers must be in their simple form, e.g.:

  • SNOMED-CT

  • ICD-10

  • LOINC

Resource version identifiers must be in the following form: <resource_id>/<version_id>, e.g.:

  • SNOMED-CT/2020-01-31

  • ICD-10/v2019

  • LOINC/v2.72

To determine which resources are available for syndication, the following upstream REST API endpoint can be used. It returns an atom feed that consists of resource versions from where one can collect the required identifiers.

Retrieve syndication resource feed

GET https://upstream-snow-owl-url/snowowl/syndication/feed.xml

Retrieves the feed of all distributable resources

Query Parameters

Name
Type
Description

resource

List<String>

The resource identifier(s) to include in the feed

resourceType

List<String>

The types of resources to include in the feed (e.g. conceptmaps, valuesets, codesystems)

resourceUrl

List<String>

The URLs of the resources to include in the feed

packageTypes

List<String>

The types of packages to include in the feed. Only BINARY is supported at the moment

effectiveTime

String

The effective time value to match (yyyyMMdd) or an effective time range value to match (yyyyMMdd...yyyyMMdd), inclusive range

createdAt

Long

Exact match filter for the resource version createdAt field

createdAtFrom

Long

Greater-than-or-equal filter for the resource version createdAt field

createdAtTo

String

Less-than-or-equal filter for the resource version createdAt field

limit*

int

The maximum number of items to return

<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <id>urn:uuid:ddce3cd6-2efe-3142-9cce-62e73d3031ca</id>
  <title>Snow Owl® Terminology Server Syndication Feed</title>
  ...
  <entry>
    <id>valuesets/1234/V1.0</id>
    ...
    <title>Valueset example</title>
    <category term="BINARY" scheme="https://b2ihealthcare.com/snow-owl/syndication/binary/1.0.0" label="Binary index"/>
    ...
  </entry>
</feed>

It is not required to list all resource version identifiers for an already selected resource. E.g.:

  • If SNOMED-CT is selected as a resource, it is not required to select all its versions among the version resource identifiers.

  • If a specific version is selected (SNOMED-CT/2020-01-31) and the resource is not listed among the selected resources, then only versions created until 2020-01-31 will be syndicated

Syndicate resources

To kick off a syndication process the following parameters are required:

  • the list of resource identifiers

  • the list of resource version identifiers

  • the upstream Snow Owl URL without its REST API root context:

    • e.g. https://upstream-snow-owl-url.com

  • the API key to authenticate with the upstream Snow Owl server

  • the upstream Elasticsearch URL, including the scheme and port:

    • e.g. https://upstream-elasticsearch-url.com:9200

  • the API key to authenticate with the upstream Elasticsearch

When there are no existing resources on the downstream server yet, at least one resource identifier or one resource version identifier must be selected.

Snow Owl will resolve all resource dependencies and will handle syndication requests rigorously. If, for example, a Value Set depends on a specific SNOMED CT version and that version is not among the selected resources - or does not exist on the downstream server yet - the syndication run will fail, noting that there is a missing dependency. It is always required to list all dependencies that the selected resources have for a given syndication run.

The above parameters should be fed to the following downstream Snow Owl REST API endpoint:

Syndicate resource(s)

POST https://downstream-snow-owl-url/snowowl/syndication/syndicate

Syndicate resources from a remote Snow Owl instance. In case no resource identifiers are provided, all existing resources will be syndicated to their latest version.

Request Body

Name
Type
Description

resource

List<String>

List of resource identifiers

version

List<String>

List of version resource identifiers

upstreamUrl*

String

The URL of the upstream Snow Owl

upstreamToken*

String

API key for the upstream Snow Owl

upstreamDataUrl*

String

The URL of the upstream Elasticsearch

upstreamDataToken*

String

API key for the upstream Elasticsearch
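
Putting it all together, an illustrative request body (identifiers, URLs and keys are placeholders; resource and version are shown as arrays per the parameter types above):

POST /snowowl/syndication/syndicate
{
  "resource": ["SNOMED-CT-US"],
  "version": ["SNOMED-CT/2021-01-31"],
  "upstreamUrl": "https://upstream-snow-owl-url.com",
  "upstreamToken": "<snow-owl-api-key>",
  "upstreamDataUrl": "https://upstream-elasticsearch-url.com:9200",
  "upstreamDataToken": "<encoded_api_key>"
}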

The syndication process starts in the background as an asynchronous job. It can be tracked by calling the following endpoint using the job identifier returned in the Location header:

Retrieve syndication job

GET https://downstream-snow-owl-url/snowowl/syndication/{id}

Returns the specified syndication run's configuration and status.

Path Parameters

Name
Type
Description

id*

String

The identifier of a syndication run

{
    // Response
}

The returned result object will contain all information related to the given syndication run:

  • status of the run (RUNNING, FINISHED, FAILED)

  • list of successfully syndicated resource versions

  • additional details about created or updated Elasticsearch indices
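
An illustrative sketch of such a result object (field names other than status are inferred from the list above, not the exact schema):

{
  "id": "<syndication-job-id>",
  "status": "FINISHED",
  "syndicatedVersions": ["SNOMED-CT/2021-01-31"],
  ...
}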

Examples of resource selection

Code Systems

There is a need to syndicate the SNOMED-CT US extension. It depends on the SNOMED CT International version 2021-01-31. Provide the following resource identifier and resource version identifier configuration:

{
  "resource": "SNOMED-CT-US",
  "version": "SNOMED-CT/2021-01-31"
}

This will syndicate all versions of SNOMED-CT-US and all international versions until 2021-01-31.

If the configuration is changed to:

{
  "resource": "SNOMED-CT-US, SNOMED-CT",
  "version": ""
}

This will syndicate all versions of SNOMED-CT-US and SNOMED-CT international, including all international versions even after 2021-01-31.

Value Sets

There is a Value Set with an identifier of VS and members from SNOMED-CT/2020-07-31:

{
  "resource": "VS",
  "version": "SNOMED-CT/2020-07-31"
}

Concept Maps

There is a Concept Map with an identifier of CM mapping concepts between LOINC/v2.72 and ICD-10/v2019:

{
  "resource": "CM",
  "version": "LOINC/v2.72, ICD-10/v2019"
}

Keeping a downstream server up-to-date

If a given downstream server already contains the desired resources and the goal is to keep the content up-to-date, it is not required to fill in the resource and resource version identifiers for the syndication request.

One can call the POST /syndication/syndicate endpoint with all the credentials and URLs but without specifying any resource or version identifier. The server will automatically determine - based on the set of existing downstream resources - if there are any new resource versions available for syndication.

To check whether there are any updates available, there is an endpoint that can be called:

Retrieve a list of resource versions which are available for syndication

GET https://downstream-snow-owl-url/snowowl/syndication/list

Returns the full list of resource versions to be syndicated based on the search criteria. If no filters are provided updates are calculated for all existing resources.

Query Parameters

Name
Type
Description

resource

List<String>

The resource identifier(s) to syndicate, e.g. SNOMEDCT (== latest version)

version

List<String>

The version identifier(s) to syndicate, e.g. SNOMEDCT/2022-07-31

upstreamUrl*

String

The URL of the upstream Snow Owl server

upstreamToken*

String

The token to authenticate with the upstream Snow Owl server

limit*

int

The number of resource versions to return if there are any

{
    "items": [
        {
            "id": "SNOMED-CT/2022-01-31",
            "version": "2022-01-31",
            "description": "2022-01-31",
            "effectiveTime": "2022-01-31",
            "resource": "codesystems/SNOMED-CT"
        },
        {
            "id": "SNOMED-CT/2022-07-31",
            "version": "2022-07-31",
            "description": "2022-07-31",
            "effectiveTime": "2022-07-31",
            "resource": "codesystems/SNOMED-CT"
        }
    ],
    "limit": 50,
    "total": 2
}

If there are any updates, this endpoint will return a list of versions; if there are none, it will return an empty result.

ValueSet

Introduction

Snow Owl TS supports interactions and operations related to value sets, as described in the FHIR R5 specification. For certain toolings implicit value sets are also expandable; these are described below in detail.

Tooling support

SNOMED CT (implicit)

Value set URIs following SNOMED International's URI format are evaluated based on the associated SNOMED CT code system's content. The following URIs can be set as the url parameter of the request:

  • http://snomed.info/sct/900000000000207008?fhir_vs - all concepts of the International Edition (may include a version suffix as well).

The implicit value set URI for SNOMED CT code systems should always include a module identifier to avoid confusion.

  • http://snomed.info/sct/900000000000207008?fhir_vs=isa/409822003 - all concepts of the International Edition that are descendants of 409822003|Domain bacteria|

  • http://snomed.info/sct/900000000000207008?fhir_vs=refset - all reference set identifier concepts in the International Edition

  • http://snomed.info/sct/900000000000207008?fhir_vs=refset/733073007 - all concepts of the International Edition that are members of the reference set 733073007|OWL axiom reference set|
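
For example, an implicit value set can be expanded by passing one of these URIs as the url parameter of the $expand operation (an illustrative request; the query value should be URL-encoded in practice, response omitted):

GET /snowowl/fhir/ValueSet/$expand?url=http://snomed.info/sct/900000000000207008?fhir_vs=isa/409822003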

Persisted value sets

Regular value sets are only supported in the paid edition of Snow Owl.

Interactions

read (instance)

GET requests that include the value set identifier as the final path segment(s) return the resource state:

GET /snowowl/fhir/ValueSet/xJn9vXKMrkU9F

[Response headers]
Content-Type: application/fhir+json

{
  "resourceType": "ValueSet",
  "id": "xJn9vXKMrkU9F",
  "meta": {
    "lastUpdated": "2023-11-30T10:22:59.716Z"
  },
  "text": {
    "status": "empty",
    "div": "<div xmlns=\"http://www.w3.org/1999/xhtml\"></div>"
  },
  "url": "https://b2ihealthcare.com/valuesets/xJn9vXKMrkU9F",
  "name": "xJn9vXKMrkU9F",
  "title": "Example Value Set",
  "status": "draft",
  "compose": {
    "include": [
      {
        "system": "http://snomed.info/sct/900000000000207008",
        "version": "2023-10-01",
        "filter": [
          {
            "property": "expression",
            "op": "=",
            "value": "<<448771007|Canis lupus subspecies familiaris (organism)|"
          }
        ]
      }
    ]
  }
}

The returned response uses a filter that is supported by SNOMED CT – this enables including or excluding concepts using an ECL expression.

Similarly to CodeSystem read interactions, query parameters _format, _summary, _elements and _pretty are also applicable, see Common request parameters for a detailed description of these options.

update (instance)

PUT requests that include a resource identifier will update an existing value set or create a new instance:

PUT /snowowl/fhir/ValueSet/xJn9vXKMrkU9F

[Request headers]
X-Author: user@host.domain
Content-Type: application/fhir+json

[Request body]
{
  "resourceType": "ValueSet",
  "id": "xJn9vXKMrkU9F",
  "text": {
    "status": "empty",
    "div": "<div xmlns=\"http://www.w3.org/1999/xhtml\"></div>"
  },
  "url": "https://b2ihealthcare.com/valuesets/xJn9vXKMrkU9F",
  "name": "xJn9vXKMrkU9F",
  "title": "Example Value Set",
  "status": "draft",
  "compose": {
    "include": [
      {
        "system": "http://snomed.info/sct/900000000000207008",
        "version": "2023-10-01",
        "filter": [
          {
            "property": "expression",
            "op": "=",
            "value": "<<448771007|Canis lupus subspecies familiaris (organism)|"
          }
        ]
      },
      // Added inclusion
      {
        "system": "http://snomed.info/sct/900000000000207008",
        "version": "2023-10-01",
        "filter": [
          {
            "property": "expression",
            "op": "=",
            "value": "<<448169003|Felis catus (organism)|"
          }
        ]
      }
    ]
  }
}

The response code is 201 Created if the resource did not exist previously, and the URL is included in the Location response header. Existing value sets (like in the example above) are updated and a 200 OK response is returned instead.

If an error occurs during the update, a 400 Bad Request response with an OperationOutcome resource as the response body is emitted instead.

The following non-standard request headers can be used to control certain aspects of the commit process:

  • X-Effective-Date -> the effective date to use if a version identifier is present in the resource without a corresponding date element

  • X-Author -> sets the user identifier that the commit should be attributed to (defaults to the authenticated user)

  • X-Owner -> sets the owner of the resource, for access control purposes in external systems (defaults to the author or the authenticated user if the former is not set)

  • X-Owner-Profile -> sets the human-readable name of the owner of the resource, for presentation purposes in external systems

  • X-Bundle-Id -> specifies the parent bundle's resource identifier (defaults to the root bundle if not set)

Value sets are currently limited to a single code system and version (domain) they can refer to when including or excluding concepts.

delete (instance)

A DELETE request removes an existing value set:

DELETE /snowowl/fhir/ValueSet/xJn9vXKMrkU9F

[Request headers]
X-Author: user@host.domain

[Response]
204 No Content

Successful removal of a resource results in a 204 No Content response.

Value sets that have been published can not be removed without adding the force=true query parameter to signal a forced deletion (this option is only available to administrators however). The example value set was never published and so can be deleted without this option.

create (type)

In create interactions a POST request is sent to the path corresponding to the resource type. Any identifier included in the request body is ignored and a new, random one is generated from scratch.

POST /snowowl/fhir/ValueSet

[Request headers]
X-Effective-Date: 2023-11-29
X-Author: user@host.domain
X-Owner: owner@host.domain
X-Owner-Profile-Name: Resource Owner
X-Bundle-Id: parent-bundle-id
Content-Type: application/fhir+json

[Request body]
{
  "resourceType": "ValueSet",
  // "id": "..." is not used by the server
  "text": {
    "status": "empty",
    "div": "<div xmlns=\"http://www.w3.org/1999/xhtml\"></div>"
  },
  "url": "https://b2ihealthcare.com/valuesets/basic-dose-forms",
  "title": "Basic dose forms",
  "version": "v1",
  "status": "active",
  "compose": {
    "include": [ {
      "system": "http://snomed.info/sct/900000000000207008",
      "version": "2021-01-31",
      "filter": [ {
          "property": "expression",
          "op": "=",
          "value": "<736478001|Basic dose form (basic dose form)|"
      } ]
    } ]
  }
}

[Response]
201 Created

[Response headers]
Location: http://<host>/snowowl/fhir/ValueSet/vmfRt532iS

The response code is 201 Created if the interaction is successful. The request URL that can be used in eg. follow-up read interactions is included in the response header named Location.

search (type)

GET requests with a request path that points to the resource type return all value sets that satisfy the specified search criteria, in the form of query parameters. The following example uses the count summary mode to determine the number of draft value sets in the system, without returning any of the matches:

GET /snowowl/fhir/ValueSet?status=draft&_summary=count

[Response headers]
Content-Type: application/fhir+json

{
  "resourceType": "Bundle",
  "id": "valuesets",
  "meta": {
    "lastUpdated": "2023-11-30T14:29:15.724489Z"
  },
  "type": "searchset",
  "total": 188
}

Just as with code system resources, POST requests are unsupported in Snow Owl and will be met with a 405 Method Not Allowed response.

The following search parameters are supported:

  • _id -> matches value sets by logical identifier

  • name -> matches value sets by name (in Snow Owl this is set to the logical identifier)

  • title -> matches value sets by title (Snow Owl uses exact, phrase and prefix matching during its lexical search activities)

  • url -> matches value sets by their assigned url value

  • version -> matches value sets by their version value

  • status -> matches value sets by resource status (eg. draft, active, etc.)

Operations

$expand

Snow Owl supports the following input parameters for value set expansion:

  • url -> the URI of the value set to expand (can be an implicit or an explicit one)

  • valueSetVersion -> the version of the value set to use for the expansion

  • activeOnly -> to return only active codes in the response

  • filter -> to filter the results lexically

  • includeDesignations -> whether to include all designations or not in the returned response

  • displayLanguage -> to select the language for the returned display values

  • count -> to select the number of codes to be returned in the expansion (10 by default)

  • after -> state tracking parameter for concept set paging

The value set with expanded concepts is returned in its entirety for this request. It includes a link that can be followed to retrieve the next page of expanded concepts:

GET /snowowl/fhir/ValueSet/$expand?url=https://b2ihealthcare.com/valuesets/basic-dose-forms&count=3

[Query parameters (repeated for clarity)]
url: https://b2ihealthcare.com/valuesets/basic-dose-forms
count: 3

[Response headers]
Content-Type: application/fhir+json

{
  "resourceType": "ValueSet",
  "id": "vmfRt532iS",
  "meta": {
    "lastUpdated": "2023-11-30T10:22:59.716Z"
  },
  "url": "https://b2ihealthcare.com/valuesets/basic-dose-forms",
  "name": "vmfRt532iS",
  "title": "Basic dose forms",
  "status": "active",
  "compose": {
    "include": [
      {
        "system": "http://snomed.info/sct/900000000000207008",
        "version": "2021-01-31",
        "filter": [
          {
            "property": "expression",
            "op": "=",
            "value": "<736478001|Basic dose form (basic dose form)|"
          }
        ]
      }
    ]
  },
  // This element does not appear when the VS is requested in a read interaction
  "expansion": {
    "identifier": "vmfRt532iS",
    // Link to the next page in the expansion (includes the "after" parameter)
    "next": "https://uat.snowray.app/snowowl/fhir/ValueSet/$expand?url=https://b2ihealthcare.com/valuesets/basic-dose-forms&displayLanguage=en-US;q=0.8,en-GB;q=0.6,en;q=0.4&count=3&after=AoEqMTIzMDIxNzAwNw==",
    "timestamp": "2023-11-30T13:07:32.254Z",
    "total": 71,
    // The number of results on a single page was limited to 3 by parameter "count"
    "contains": [
      {
        "system": "codesystems/SNOMEDCT/2021-01-31",
        "code": "1230183009",
        "display": "Dispersion (basic dose form)"
      },
      {
        "system": "codesystems/SNOMEDCT/2021-01-31",
        "code": "1230206006",
        "display": "Compressed lozenge (basic dose form)"
      },
      {
        "system": "codesystems/SNOMEDCT/2021-01-31",
        "code": "1230217007",
        "display": "Molded lozenge (basic dose form)"
      }
    ]
  }
}

Supplying a value set as part of the request (via the input parameter valueSet) is not supported – nor can additional resources be supplied for expansion via the unofficial tx-resource parameter.

$validate-code

The operation is supported both on the instance level (in this case the value set is located by resource ID) as well as the type level (a canonical URL must be supplied in the url input parameter to identify the value set to use).

Codes can only be validated against persisted value sets, not implicit ones.

Encountering any of the following conditions will fail the code validation check:

  • The specified value set does not exist

  • The value set does not contain the specified code in its expansion

  • The code does not exist in the code system specified in the request (the corresponding parameter, system, is mandatory in Snow Owl)

  • The specified code system version differs from the version referenced by the value set

The following example checks whether 429885007|Bar| satisfies the aforementioned conditions in the value set containing all basic dose forms we created earlier:

GET /snowowl/fhir/ValueSet/$validate-code?url=https://b2ihealthcare.com/valuesets/basic-dose-forms&system=http://snomed.info/sct/900000000000207008&code=429885007

[Query parameters (repeated for clarity)]
url: https://b2ihealthcare.com/valuesets/basic-dose-forms
system: http://snomed.info/sct/900000000000207008
code: 429885007

[Response headers]
Content-Type: application/fhir+json

{
  "resourceType": "Parameters",
  "parameter": [
    {
      "name": "result",
      "valueBoolean": true
    },
    {
      "name": "message",
      "valueString": "OK"
    },
    {
      "name": "display",
      "valueString": "Bar"
    }
  ]
}

ConceptMap

The paid version of Snow Owl TS supports interactions and operations that target concept map resources.

Interactions

read (instance)

GET requests that include the identifier of the concept map return the resource's current state:

GET /snowowl/fhir/ConceptMap/69YWt6qc1ydgwARjh8XNw2

[Response headers]
Content-Type: application/fhir+json

{
  "resourceType": "ConceptMap",
  "id": "69YWt6qc1ydgwARjh8XNw2",
  "meta": {
    "lastUpdated": "2023-11-30T16:36:31.653Z"
  },
  "language": "en",
  "text": {
    "status": "empty",
    "div": "<div xmlns=\"http://www.w3.org/1999/xhtml\"></div>"
  },
  "url": "https://b2ihealthcare.com/conceptmaps/example-concept-map",
  "name": "69YWt6qc1ydgwARjh8XNw2",
  "title": "Example Concept Map",
  "status": "draft",
  "description": "# Example Concept Map",
  "group": [
    {
      "source": "http://snomed.info/sct/900000000000207008|2023-10-01",
      "target": "https://b2ihealthcare.com/codesystems/example-lcs-1|v1",
      "element": [
        {
          "code": "103015000",
          "display": "Thoracic nerve root pain",
          "target": [
            {
              "code": "C00",
              "display": "Root concept",
              "relationship": "equivalent"
            }
          ]
        }
      ]
    }
  ]
}

Just as with CodeSystem or ValueSet read interactions, query parameters _format, _summary, _elements and _pretty are applicable. Common request parameters expands on these settings.

update (instance)

PUT requests that include a resource identifier will update an existing resource (or create a new one if it didn't exist earlier):

PUT /snowowl/fhir/ConceptMap/69YWt6qc1ydgwARjh8XNw2

[Request headers]
X-Author: user@host.domain
Content-Type: application/fhir+json

[Request body]
{
  "resourceType": "ConceptMap",
  "id": "69YWt6qc1ydgwARjh8XNw2",
  "meta": {
    "lastUpdated": "2023-11-30T16:36:31.653Z"
  },
  "language": "en",
  "text": {
    "status": "empty",
    "div": "<div xmlns=\"http://www.w3.org/1999/xhtml\"></div>"
  },
  "url": "https://b2ihealthcare.com/conceptmaps/example-concept-map",
  "name": "69YWt6qc1ydgwARjh8XNw2",
  "title": "Example Concept Map",
  "status": "draft",
  "description": "# Example Concept Map",
  "group": [
    {
      "source": "http://snomed.info/sct/900000000000207008|2023-10-01",
      "target": "https://b2ihealthcare.com/codesystems/example-lcs-1|v1",
      "element": [
        {
          "code": "103015000",
          "display": "Thoracic nerve root pain",
          "target": [
            {
              "code": "C00",
              "display": "Root concept",
              "relationship": "equivalent"
            }
          ]
        },
        // Added mapping
        {
          "code": "102506008",
          "display": "Well child (finding)",
          "target": [
            {
              "code": "C00-1",
              "display": "Child concept",
              "relationship": "equivalent"
            }
          ]
        }
      ]
    }
  ]
}

The response code is 201 Created if the resource did not exist previously, and the URL is included in the Location response header. Existing concept maps (like in the example above) are updated and a 200 OK response is returned instead.

If an error occurs during the update, a 400 Bad Request response with an OperationOutcome resource as the response body is emitted instead.

The following non-standard request headers can be used to control certain aspects of the commit process:

  • X-Effective-Date -> the effective date to use if a version identifier is present in the resource without a corresponding date element

  • X-Author -> sets the user identifier that the commit should be attributed to (defaults to the authenticated user)

  • X-Owner -> sets the owner of the resource, for access control purposes in external systems (defaults to the author or the authenticated user if the former is not set)

  • X-Owner-Profile -> sets the human-readable name of the owner of the resource, for presentation purposes in external systems

  • X-Bundle-Id -> specifies the parent bundle's resource identifier (defaults to the root bundle if not set)

Concept maps are currently limited to a single source and target code system and version (group) they can use to map concepts.

delete (instance)

A DELETE request removes an existing concept map:

DELETE /snowowl/fhir/ConceptMap/69YWt6qc1ydgwARjh8XNw2

[Request headers]
X-Author: user@host.domain

[Response]
204 No Content

Successful removal of a resource results in a 204 No Content response.

Concept maps that have already been versioned can not be removed without adding the force=true query parameter to signal a forced deletion (this option is only available to administrators however).

create (type)

In create interactions a POST request is sent to the path corresponding to the resource type. Any identifier included in the request body is ignored and a new, random one is generated from scratch.

POST /snowowl/fhir/ConceptMap

[Request headers]
X-Effective-Date: 2023-11-29
X-Author: user@host.domain
X-Owner: owner@host.domain
X-Owner-Profile-Name: Resource Owner
X-Bundle-Id: parent-bundle-id
Content-Type: application/fhir+json

[Request body]
{
  "resourceType": "ConceptMap",
  // "id": "..." is not used by the server
  "text": {
    "status": "empty",
    "div": "<div xmlns=\"http://www.w3.org/1999/xhtml\"></div>"
  },
  "url": "https://b2ihealthcare.com/conceptmaps/example-concept-map-2",
  "title": "Example Concept Map",
  "status": "draft",
  "group": [
    {
      "source": "http://snomed.info/sct/900000000000207008|2023-10-01",
      "target": "https://b2ihealthcare.com/codesystems/example-lcs-1|v1",
      "element": [
        {
          "code": "103015000",
          "display": "Thoracic nerve root pain (finding)",
          "target": [
            {
              "code": "C00",
              "display": "Root concept",
              "relationship": "equivalent"
            }
          ]
        }
      ]
    }
  ]
}



[Response]
201 Created

[Response headers]
Location: http://<host>/snowowl/fhir/ConceptMap/cndkDE31kfeXw8

The response code is 201 Created if the interaction is successful. The resource URL that can be used in eg. follow-up read interactions is included in the Location response header.

search (type)

GET requests with a request path that points to the resource type return all concept maps that satisfy the specified search criteria, in the form of query parameters. The following example uses the count summary mode to determine the number of active concept maps in the system, without returning any of the matches:

GET /snowowl/fhir/ConceptMap?status=active&_summary=count

[Response headers]
Content-Type: application/fhir+json

{
  "resourceType": "Bundle",
  "id": "conceptmaps",
  "meta": {
    "lastUpdated": "2023-11-30T16:52:08.443803Z"
  },
  "type": "searchset",
  "total": 22
}

Just as with code system and value set resources, POST requests for search interactions are unsupported in Snow Owl and will result in a 405 Method Not Allowed response.

The following search parameters are supported:

  • _id -> matches concept maps by logical identifier

  • name -> matches concept maps by name (in Snow Owl this is set to the logical identifier)

  • title -> matches concept maps by title (Snow Owl uses exact, phrase and prefix matching during its lexical search activities)

  • url -> matches concept maps by their assigned url value

  • version -> matches concept maps by their version value

  • status -> matches concept maps by resource status (eg. draft, active, etc.)

Operations

$translate

Snow Owl TS uses the path parameter (and optionally, conceptMapVersion) to identify the concept map used for the operation. Supplying a concept map resource "inline" as input parameter conceptMap is not supported.

Parameters sourceScope and targetScope are also ignored.

The presence of a source* parameter (code, coding or codeable concept) implies that the translation needs to find "forward" (target) matches, while target* codes, codings or codeable concepts will run the translation in reverse.

An example translation request can be seen below:

GET /snowowl/fhir/ConceptMap/cndkDE31kfeXw8/$translate?system=http://snomed.info/sct/900000000000207008&code=103015000

[Query parameters (repeated for clarity)]
system: http://snomed.info/sct/900000000000207008
code: 103015000

[Response headers]
Content-Type: application/fhir+json

{
  "resourceType": "Parameters",
  "parameter": [
    {
      "name": "result",
      "valueBoolean": true
    },
    {
      "name": "message",
      "valueString": "1 member(s) from concept map: cndkDE31kfeXw8"
    },
    {
      "name": "match",
      "part": [
        {
          "name": "relationship",
          "valueCode": "equivalent"
        },
        {
          "name": "concept",
          "valueCoding": {
            "system": "codesystems/example-lcs-1",
            "code": "C00",
            "display": "Root concept"
          }
        },
        {
          "name": "originMap",
          "valueUri": "cndkDE31kfeXw8"
        }
      ]
    }
  ]
}

Native API

This section describes common information about all available native APIs.

Media Types

Custom media types are used in the API to let consumers choose the format of the data they wish to receive. This is done by adding one of the following types to the Accept header when you make a request. Media types are specific to resources, allowing them to change independently and support formats that other resources don’t.

The most basic media types the API supports are:

  1. application/json (default)

  2. text/plain;charset=UTF-8

  3. text/csv;charset=UTF-8

  4. application/octet-stream (for file downloads)

  5. multipart/form-data (for file uploads)

We encourage you to explicitly set the accepted content type before sending your request.

Schema

All data is sent and received as JSON. Blank fields are omitted instead of being included as null.

Timestamps use the ISO 8601 format:

YYYY-MM-DDTHH:MM:SSZ

Effective time values used in SNOMED CT (and other terminology content as well) are sent and received in short format:

yyyyMMdd

Hypermedia

Requests that create a new resource return the URL of that resource in the Location response header. An example Location header looks like this:

http://example.com/snowowl/snomedct/SNOMEDCT/concepts/123456789

Pagination

Requests that return multiple items will be paginated to 50 items by default. You can request further pages with the searchAfter query parameter.
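
For example, the first page and the page following it could be requested like this (the searchAfter key is an opaque value taken from the previous response, shown here as a placeholder):

GET /snowowl/snomedct/SNOMEDCT/concepts?limit=50

GET /snowowl/snomedct/SNOMEDCT/concepts?limit=50&searchAfter=<key-from-previous-response>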

Resource expansion

Where applicable, the expand query parameter will include nested objects in the response, to avoid having to issue multiple requests to the server.

Expanded properties should be followed by parentheses and separated by commas; any options for the expanded property should be given within the parentheses, including properties to expand. Typical values for parameters are given in the "Implementation Notes" section of each endpoint.

GET /snowowl/snomedct/SNOMEDCT/concepts?limit=50&expand=fsn(),descriptions()

Response:

{
  "items": [
    {
      "id": "100005",
      "released": true,
      ...
      "fsn": {
        "id": "2709997016",
        "term": "SNOMED RT Concept (special concept)",
        ...
      },
      "descriptions": {
        "items": [
          {
            "id": "208187016",
            "released": true,
            ...
          }
        ],
        "offset": 0,
        "limit": 5,
        "total": 5
      }
    },
    ...
  ],
  "offset": 0,
  "limit": 50,
  "total": 421657
}

Client Errors

There are three possible types of client errors on API calls that receive request bodies:

Invalid JSON

// 400 Bad Request
{
  "status" : "400",
  "message" : "Invalid JSON representation",
  "developerMessage" : "detailed information about the error for developers"
}

Valid JSON but invalid representation

// 400 Bad Request 
{
  "status" : "400",
  "message" : "2 Validation errors",
  "developerMessage" : "Input representation syntax or validation errors. Check input values.",
  "violations" : ["violation_message_1", "violation_message_2"]
}

Conflicts

// 409 Conflict 
{
  "status" : "409",
  "message" : "Cannot merge source 'branch1' into target 'MAIN'."
}

Server Errors

In certain circumstances, Snow Owl might fail to process a request and will respond with a 500 Internal Server Error.

// 500 Internal Server Error 
{
  "status" : "500",
  "message" : "Something went wrong during the processing of your request.",
  "developerMessage" : "detailed information about the error for developers"
}

Path parameters

Snow Owl is a revision-based terminology server, where terminology data (concepts, descriptions, etc.) is stored in multiple revisions across multiple branches. When requesting content from the terminology server, clients can specify a path value or expression to select the content they'd like to access and receive.

For example, Snow Owl supports importing SNOMED CT content from different sources, allowing eg. multiple national Extensions to co-exist with the base International Edition provided by SNOMED International.

Versioned editions can be consulted when non-current representations of concepts need to be accessed. Concept authoring and review can also be done in isolation. Both Java and REST API endpoints require a path parameter to select the content (or substrate) the user wishes to work with.

The following formats are accepted:

Absolute branch path

Absolute branch path parameters start with MAIN and point to a branch in the backing terminology repository. In the following example, the substrate consists of all concepts that are on branch MAIN/2021-01-31/SNOMEDCT-UK-CL or any of its ancestors (ie. MAIN or MAIN/2021-01-31), unless they have been modified:

GET /snomedct/MAIN/2021-01-31/SNOMEDCT-UK-CL/concepts
{
  "items": [
    {
      "id": "100000000",
      "released": true,
      "active": false,
      "effectiveTime": "20090731",
[...]

Relative branch path

Relative branch paths start with a short name identifying a SNOMED CT code system, and are relative to the code system's working branch. For example, if the working branch of code system SNOMEDCT-UK-CL is configured to MAIN/2021-01-31/SNOMEDCT-UK-CL, concepts visible on authoring task #100 can be retrieved using the following request:

GET /snomedct/SNOMEDCT-UK-CL/100/concepts

An alternative request that uses an absolute path would be the following:

GET /snomedct/MAIN/2021-01-31/SNOMEDCT-UK-CL/100/concepts

An important difference is that the relative path parameter tracks the working branch specified in the code system's settings, so requests using relative paths do not need to be adjusted when a code system is upgraded to a more recent International Edition.

Path range

The substrate represented by a path range consists of concepts that were created or modified between a starting and ending point, each identified by an absolute branch path (relative paths are not supported). The format of a path range is fromPath...toPath.

To retrieve concepts authored or edited following version 2020-08-05 of code system SNOMEDCT-UK-CL, the following path expression should be used:

GET /snomedct/MAIN/2019-07-31/SNOMEDCT-UK-CL/2020-08-05...MAIN/2021-01-31/SNOMEDCT-UK-CL/concepts

The result set also includes concepts appearing or changing between versions 2019-07-31 and 2021-01-31 of the International Edition; if this is not desired, additional constraints can be added to exclude them.

Path with timestamp

To refer to a branch state at a specific point in time, use the path@timestamp format. The timestamp is an integer value expressing the number of milliseconds since the UNIX epoch, 1970-01-01 00:00:00 UTC, and corresponds to "wall clock" time, not component effective time. As an example, if the SNOMED CT International version 2021-07-31 is imported on 2021-09-01 13:50:00 UTC, the following request to retrieve concepts will not include any new or changed concepts appearing in this release:

GET /snomedct/MAIN@1630504199999/concepts

Both absolute and relative paths are supported in the path part of the expression.

Branch base point

Concept requests using a branch base point reflect the state of the branch at its beginning before any changes on it were made. The format of a base path is path^ (only absolute paths are supported):

GET /snomedct/MAIN/2019-07-31/SNOMEDCT-UK-CL/101^/concepts

Returned concepts include all additions and modifications made on SNOMEDCT-UK-CL's working branch, up to the point where task #101 starts; neither changes committed to the working branch after task #101, nor changes on task #101 itself are reflected in the result set.

SNOMED CT API

This describes the resources that make up the official Snow Owl® SNOMED CT Terminology API.

Available resources and services

CodeSystem

Tooling support

Snow Owl TS differentiates between certain "families" (toolings) of code system resources. Each tooling uses an internal representation for its terminology components that can be searched and edited more effectively. These components are converted to a FHIR-compatible form when a read or search interaction happens, and back when a new CodeSystem resource is created via the FHIR API (or an existing resource receives an update).

SNOMED CT

SNOMED CT and its extensions are effectively read-only from the FHIR API's point of view – they can not be created via a create interaction, nor can content be loaded or updated from an RF2 archive. Only Snow Owl's native API has provisions for doing so.

As SNOMED CT code systems contain a considerable amount of concepts, resource responses for this tooling do not include a concept array and content is always set to not-present to highlight this fact:

GET /snowowl/fhir/CodeSystem/SNOMEDCT-US?_pretty=true

[Response headers]
Content-Type: application/fhir+xml

<CodeSystem xmlns="http://hl7.org/fhir">
  <id value="SNOMEDCT-US"/>
  <meta>
    <lastUpdated value="2023-10-18T01:52:16.04Z"/>
  </meta>
  <language value="en"/>
  ...
  <url value="http://snomed.info/sct/731000124108"/>
  ...
  <name value="SNOMEDCT-US"/>
  <title value="SNOMED CT US Extension"/>
  <status value="active"/>
  <publisher value="NIH - National Library of Medicine"/>
  ...  
  <content value="not-present"/>
  <count value="513765"/>
  ...

All SNOMED CT resources behave like editions, which means lookup operations succeed for any concept that can be found within eg. the International Edition's content, or in any other extension the resource depends on:

GET /snowowl/fhir/CodeSystem/$lookup?system=http://snomed.info/sct/731000124108&code=138875005&_pretty=true

[Query parameters (repeated for clarity)]
system: http://snomed.info/sct/731000124108
code: 138875005
_pretty: true

[Response headers]
Content-Type: application/fhir+xml

<Parameters xmlns="http://hl7.org/fhir">
  <parameter>
    <name value="name"/>
    <valueString value="SNOMEDCT-US"/>
  </parameter>
  <parameter>
    <name value="display"/>
    <valueString value="SNOMED CT Concept"/>
  </parameter>
</Parameters>

Filters

The following filters are supported in value set compose statements, should they reference a SNOMED CT code system:

  • code: expression, operator: =, value: <ECL expression>

    • matches concepts using the Expression Constraint Language

  • code: expressions, operator: =, value: <true|false>

    • specifies whether Post-Coordinated Expressions should be included in the filtered result set or not (even though the filter is recognized, Snow Owl currently does not store PCEs)

  • code: concept, operator: is-a, value: <conceptId>

    • matches the descendants of the specified parent concept

  • code: concept, operator: in, value: <refsetId>

    • matches concepts that are members of the specified reference set

Properties

All SNOMED CT relationship types are supported and can be returned along with concept data if requested. An example can be seen below:

  • code: 288556008 (the attribute concept's ID), type: code, URI: http://snomed.info/id/288556008, description: Before

In addition to these (and the common properties that are applicable to all code systems), Snow Owl can also return the following set of SNOMED CT-specific concept properties:

  • code: inactive, type: boolean, URI: http://snomed.info/field/Concept.inactive

    • The concept's active RF2 property (inverted)

  • code: moduleId, type: code, URI: http://snomed.info/field/Concept.moduleId

    • The concept's moduleId RF2 property

  • code: effectiveTime, type: string, URI: http://snomed.info/field/Concept.effectiveTime

    • The concept's effectiveTime RF2 property (in yyyyMMdd format)

  • code: sufficientlyDefined, type: boolean, URI: http://snomed.info/field/Concept.sufficientlyDefined

    • A boolean value derived from the concept's definitionStatusId (set to true if the original value is 900000000000073002|Defined|, false otherwise)

ICD-10

ICD-10 and its national variants can be imported from XML release files using the ClaML format in the paid edition of Snow Owl; however, this functionality is not exposed via FHIR requests, and neither can such a code system be created using a create interaction.

Similarly to SNOMED CT resources, the concept array remains unpopulated with content set to not-present in the response. ICD-10 concepts can still participate in lookup, validation or subsumption testing operations.

In the paid edition, the official release archive can be used to populate the code system's contents. It is available over the FHIR API in a read-only fashion, like the previous two toolings.

Local code systems

In Snow Owl's paid edition, FHIR code system resources submitted as part of a create or update interaction (the latter when no resource existed with the same ID earlier) become members of the Local Code System (LCS) tooling. Custom properties may be defined when submitting the creation request and added to concepts of the code system.

The code system URL can be set upon creation and is checked for uniqueness.

The concept array is fully populated in LCS responses and content is set to complete.

Properties

User-defined properties listed in the top-level property elements of the resource get included in the LCS schema. The cardinality is set to [0..*], which means concepts can have any number of properties associated with the same type code.

Interactions

read (instance)

Standard GET requests that include the identifier as the final path segment(s) return the code system's current (or versioned) state:

GET /snowowl/fhir/CodeSystem/SNOMEDCT-UK-CL/2023-08-02

[Response headers]
Content-Type: application/fhir+json

{
  "resourceType": "CodeSystem",
  "id": "SNOMEDCT-UK-CL/2023-08-02",
  "meta": {
    "lastUpdated": "2023-10-17T15:44:27.568Z"
  },
  "language": "en",
  "text": {
    "status": "empty",
    "div": "<div xmlns=\"http://www.w3.org/1999/xhtml\"></div>"
  },
  "url": "http://snomed.info/sct/999000011000000103",
  ...  
  "name": "SNOMEDCT-UK-CL/2023-08-02",
  "title": "SNOMED CT UK Clinical Extension",
  "status": "active",
  "date": "2023-08-02T00:00:00Z", // Versioned resources include the "date" property
  "publisher": "NHS England",
  "contact": [ {
    "telecom": [ {
      "system": "url",
      "value": "https://www.england.nhs.uk/"
    } ]
  } ],
  "description": "SNOMED CT UK Clinical Extension",
  ...
}

Common parameters like _format, _summary, _elements and _pretty are also applicable. These are described on the previous page: Common request parameters

update (instance)

PUT requests that include a resource identifier update an existing (local) code system or create a new instance:

PUT /snowowl/fhir/CodeSystem/example-lcs-1

[Request headers]
X-Effective-Date: 2023-11-29
X-Author: user@host.domain
X-Owner: owner@host.domain
X-Owner-Profile-Name: Resource Owner
X-Bundle-Id: parent-bundle-id
Content-Type: application/fhir+json

[Request body]
{
  "resourceType": "CodeSystem",
  "id": "example-lcs-1",
  "text": {
    "status": "empty",
    "div": "<div xmlns=\"http://www.w3.org/1999/xhtml\"></div>"
  },
  "url": "https://b2ihealthcare.com/codesystems/example-lcs-1",
  "name": "example-lcs-1",
  "version": "v1",
  "title": "Example LCS",
  "status": "draft",
  "content": "complete",
  "count": 2,
  "concept": [
    {
      "code": "C00",
      "display": "Parent concept"
    },
    {
      "code": "C00-1",
      "display": "Child concept",
      "property": [ {
          "code": "parent",
          "valueCode": "C00"
      } ]
    }
  ]
}

The response code is 201 Created if the resource did not exist previously, and the URL is included in the Location response header. Existing code systems are updated and a 200 OK response is returned instead.

If an error occurs during the update, a 400 Bad Request response with an OperationOutcome resource as the response body is emitted instead.

The following non-standard request headers can be used to control certain aspects of the commit process:

  • X-Effective-Date -> the effective date to use if a version identifier is present in the resource without a corresponding date element

  • X-Author -> sets the user identifier that the commit should be attributed to (defaults to the authenticated user)

  • X-Owner -> sets the owner of the resource, for access control purposes in external systems (defaults to the author or the authenticated user if the former is not set)

  • X-Owner-Profile -> sets the human-readable name of the owner of the resource, for presentation purposes in external systems

  • X-Bundle-Id -> specifies the parent bundle's resource identifier (defaults to the root bundle if not set)

delete (instance)

A DELETE request removes an existing code system:

DELETE /snowowl/fhir/CodeSystem/example-lcs-1?force=true

[Request headers]
X-Author: user@host.domain

[Response]
204 No Content

Successful removal of a code system resource results in a 204 No Content response. Code systems that have been published can not be removed without adding the force=true query parameter to signal a forced deletion – this option in turn is only available to administrators.

create (type)

POST requests are very similar to the instance-level update interactions with the following important difference: the identifier included in the request body is ignored and a new, random one is generated from scratch. The request path should also omit the path segments corresponding to the resource identifier:

POST /snowowl/fhir/CodeSystem

[Request headers]
X-Effective-Date: 2023-11-29
X-Author: user@host.domain
X-Owner: owner@host.domain
X-Owner-Profile-Name: Resource Owner
X-Bundle-Id: parent-bundle-id
Content-Type: application/fhir+json

[Request body]
{
  "resourceType": "CodeSystem",
  // "id": "..." is not used by the server
  "text": {
    "status": "empty",
    "div": "<div xmlns=\"http://www.w3.org/1999/xhtml\"></div>"
  },
  "url": "https://b2ihealthcare.com/codesystems/example-lcs-1",
  "name": "example-lcs-1",
  "version": "v1",
  "title": "Example LCS",
  "status": "active",
  "content": "complete",
  "count": 2,
  "concept": [
    {
      "code": "C00",
      "display": "Parent concept"
    },
    {
      "code": "C00-1",
      "display": "Child concept",
      "property": [ {
          "code": "parent",
          "valueCode": "C00"
      } ]
    }
  ]
}

[Response]
201 Created

[Response headers]
Location: http://<host>/snowowl/fhir/CodeSystem/ExWn1g2gdIIQ

The response code is 201 Created if the interaction is successful. As mentioned above, the resource URL that can be used in eg. follow-up read interactions is included in the Location response header.

search (type)

GET requests targeting the endpoint corresponding to the resource type return all code systems that satisfy the specified search criteria, in the form of query parameters. The following example uses the count summary mode to determine the total number of code systems registered in the system, without returning any of the matches:

GET /snowowl/fhir/CodeSystem?_summary=count

[Response headers]
Content-Type: application/fhir+json

{
  "resourceType": "Bundle",
  "id": "codesystems",
  "meta": {
    "lastUpdated": "2023-11-29T16:15:36.187124Z"
  },
  "type": "searchset",
  "total": 1685
}

The specification allows search interactions to be initiated via a POST request to /fhir/CodeSystem/_search using name=value parameter pairs encoded with MIME type x-www-form-urlencoded, however this is unsupported in Snow Owl and results in a 405 Method Not Allowed response.

The following search parameters are supported:

  • _id -> matches code systems by logical identifier

  • name -> matches code systems by name (in Snow Owl this is set to the logical identifier)

  • title -> matches code systems by title (Snow Owl uses exact, phrase and prefix matching during its lexical search activities)

  • url -> matches code systems by their assigned url value

  • version -> matches code systems by their version value

  • status -> matches code systems by resource status (eg. draft, active, etc.)

Operations

$lookup

Both GET and POST HTTP methods are supported. Concepts are queried based on code, version, system or Coding. Designations are included as part of the response, as well as supported concept properties when requested. Date parameters are not supported.

The following example request retrieves details about the SNOMED CT concept 128927009|Procedure by method|, including properties "inactive" and "Method" (a SNOMED CT attribute), using the latest version of the code system:

GET /snowowl/fhir/CodeSystem/$lookup?system=http://snomed.info/sct&code=128927009&property=inactive&property=http://snomed.info/id/260686004

[Query parameters (repeated for clarity)]
system: http://snomed.info/sct
code: 128927009
property: inactive
property: http://snomed.info/id/260686004

[Response headers]
Content-Type: application/fhir+json

{
  "resourceType": "Parameters",
  "parameter": [
    {
      "name": "name",
      "valueString": "SNOMEDCT" // The code system name (resolved from the URL)
    },
    {
      "name": "display",
      "valueString": "Procedure by method" // The concept's display name
    },
    {
      "name": "property",
      "part": [
        {
          "name": "code",
          "valueCode": "inactive"
        },
        {
          "name": "value",
          "valueBoolean": false // The value for the concept property "inactive"
        },
        {
          "name": "description",
          "valueString": "inactive"
        }
      ]
    },
    {
      "name": "property",
      "part": [
        {
          "name": "code",
          "valueCode": "260686004" // The SCTID of the attribute "Method"
        },
        {
          "name": "value",
          "valueCode": "129264002" // The SCTID of the destination concept, "Action"
        }
      ]
    }
  ]
}

$validate-code

Both GET and POST HTTP methods are supported.

The example request below validates that 128927009|Procedure by method| is present in SNOMED CT, selecting the resource to use for validation using a versioned identifier (an instance-level invocation):

GET /snowowl/fhir/CodeSystem/SNOMEDCT/2021-07-31/$validate-code?code=128927009

[Response headers]
Content-Type: application/fhir+json

{
  "resourceType": "Parameters",
  "parameter": [
    {
      "name": "result",
      "valueBoolean": true
    }
  ]
}

$subsumes

Both GET and POST HTTP methods are supported.

Subsumption testing is supported for all terminologies, including SNOMED CT. The example uses version 2022-02-28 of code system SNOMED CT (via the URL in the system query parameter) to determine whether 409822003|Domain Bacteria| is an ancestor of 112283007|Escherichia coli| (type-level invocation):

GET /snowowl/fhir/CodeSystem/$subsumes?codeA=409822003&codeB=112283007&system=http://snomed.info/sct/900000000000207008/version/20220228

[Query parameters (repeated for clarity)]
codeA: 409822003
codeB: 112283007
system: http://snomed.info/sct/900000000000207008/version/20220228

[Response headers]
Content-Type: application/fhir+json

{
  "resourceType": "Parameters",
  "parameter": [
    {
      "name": "outcome",
      "valueCode": "subsumes"
    }
  ]
}

The response is positive (and encouraging). Swapping the two codes gives us a subsumed-by result instead.

Concepts

Introduction

SNOMED CT concepts represent ideas that are relevant in a clinical setting and have a unique concept identifier (a SNOMED CT identifier or SCTID for short) assigned to them. The terminology covers a wide set of domains and includes concepts that represent parts of the human body, clinical findings, medicinal products and devices, among many others. SCTIDs make it easy to refer unambiguously to the described ideas in eg. an Electronic Health Record or prescription, while SNOMED CT's highly connected nature allows complex analytics to be performed on aggregated data.

The three component types mentioned above (also called core components) have a distinct set of attributes which together form the concept's definition. As an example, each concept includes an attribute (the definition status) which states whether the definition is sufficiently defined (and so can be computationally processed), or relies on a (human) reader to come up with the correct meaning based on the associated descriptions.

Terminology services exposed by Snow Owl allow clients to create, retrieve, modify or remove concepts from a SNOMED CT code system (concepts that are considered to be already published to consumers can only be removed with an administrative operation). Concepts can be retrieved by SCTID or description search terms; results can be further constrained via Expression Constraint Language (ECL for short) expressions.

Resource format

A concept resource without any expanded properties looks like the following:
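
A minimal sketch of such a resource, assembled from the properties described in this section (all values are illustrative):

{
  "id": "103015000",
  "released": true,
  "active": true,
  "effectiveTime": "20020131",
  "moduleId": "900000000000207008",
  "definitionStatusId": "900000000000074008",
  "subclassDefinitionStatus": "NON_DISJOINT_SUBCLASSES",
  "iconId": "finding",
  "parentIds": [ ... ],
  "ancestorIds": [ ... ],
  "statedParentIds": [ ... ],
  "statedAncestorIds": [ ... ]
}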

Properties

  • id

  • effectiveTime

  • active

  • moduleId

  • definitionStatusId

It also contains the following supplementary information:

  • parentIds, ancestorIds

These arrays hold a set of SCTIDs representing the concept's direct and indirect ancestors in the inferred taxonomy. The (direct) parents array contains all destinationIds from active and inferred IS A relationships where the sourceId matches this concept's SCTID, while the ancestor array contains all SCTIDs taken from the parent and ancestor array of direct parents. The arrays are sorted by SCTID. A value of -1 means that the concept is a root concept that does not have any concepts defined as its parent. Typically, this only applies to 138875005|SNOMED CT Concept| in SNOMED CT content.

See the following example response for a concept placed deeper in the tree:

Compare the output with a rendering from a user interface, where the concept appears in two different places after exploring alternative routes in the hierarchy. Parents are marked with blue, while ancestors are highlighted with orange:

  • statedParentIds, statedAncestorIds

Same as the above, but for the stated taxonomy view.

  • released

A boolean value indicating whether this concept was part of at least one SNOMED CT release. New concepts start with a value of false, which is set to true as part of the code system versioning process. Released concepts can only be deleted by an administrator.

  • iconId

An identifier that can be used to select an icon for the concept, typically derived from the concept's hierarchy tag. Possible values include:

administration_method, assessment_scale, attribute, basic_dose_form, body_structure, cell, cell_structure, clinical_drug, disorder, disposition, dose_form, environment, environment_location, ethnic_group, event, finding, geographic_location, inactive_concept, intended_site, life_style, link_assertion, linkage_concept, medicinal_product, medicinal_product_form, metadata, morphologic_abnormality, namespace_concept, navigational_concept, observable_entity, occupation, organism, owl_metadata_concept, person, physical_force, physical_object, procedure, product, product_name, qualifier_value, racial_group, record_artifact, regime_therapy, release_characteristic, religion_philosophy, role, situation, snomed_rt_ctv3, social_concept, special_concept, specimen, staging_scale, state_of_matter, substance, supplier, transformation, tumor_staging, unit_of_presentation

In the metadata hierarchy, the use of a hierarchy tag alone would not distinguish concepts finely enough, as lots of them will have eg. "foundation metadata concept" set as their tag. In these cases, concept identifiers may be used as the icon identifier.

  • subclassDefinitionStatus

Indicates whether the direct subtypes of the concept are declared to be mutually disjoint (DISJOINT_SUBCLASSES). The default value is NON_DISJOINT_SUBCLASSES, where no such assumption is made.

Property expansion

Core component information related to the current concept can be attached to the response by using the expand query parameter, allowing clients to retrieve more data in a single roundtrip. Property expansion runs the necessary requests internally, and attaches results to the original response object.

Expand options are expected to appear in the form of propertyName1(option1: value1, option2: value2, expand(...)), propertyName2() where:

  • propertyNameN stands for the property to expand;

  • optionN: valueN are key-value pairs providing additional filtering for the expanded property;

  • optionally, expands can be nested, and the options will apply to the components returned under the parent property;

  • when no expand options are given, an empty set of () parentheses need to be added after the property name.
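
For example, the following expression expands the preferred descriptions of a concept as well as its direct descendants, nesting the descendants' preferred terms inside (option names are taken from the sections below):

expand=preferredDescriptions(),descendants(direct:true, expand(pt()))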

Supported expandable property names are:

referenceSet()

If a corresponding reference set was already created for an identifier concept (a subtype of 900000000000455006|Reference set|), information about the reference set will appear in the response:

To retrieve reference set members along with the reference set in a single request, use a nested expand property named members:
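
A sketch of such a request, using a placeholder for the identifier concept's SCTID:

GET /snowowl/snomedct/SNOMEDCT/concepts/{identifierConceptId}?expand=referenceSet(expand(members()))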

preferredDescriptions()

Expands descriptions with preferred acceptability.

Returns all active descriptions that have at least one active language reference set member with an acceptabilityId of 900000000000548007|Preferred|, in compact form, along with the concept. Preferred descriptions are frequently used on UIs when a display label is required for a concept.

This information is also returned when expand options pt() or fsn() (described later) are present.

semanticTags()

Returns hierarchy tags extracted from FSNs.

An array containing the hierarchy tags from all Fully Specified Name-typed descriptions of the concept is added as an expanded property if this option is present:
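
For example, a concept whose only FSN is "Pneumonia (disorder)" would include the following expanded property (illustrative):

"semanticTags": [ "disorder" ]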

inactivationProperties()

Collects information from concept inactivation indicator and historical association reference set members referencing this concept.

Members of 900000000000489007|Concept inactivation indicator attribute value reference set| and subtypes of 900000000000522004|Historical association reference set| hold information about the reason a concept is being retired in a release, as well as suggest potential replacement(s) for future use.

The concept stating the reason for inactivation is placed under inactivationProperties.inactivationIndicator.id (a short-hand property exists without an extra nesting, named inactivationProperties.inactivationIndicatorId). It is expected that only a single active inactivation indicator exists for an inactive concept.

Historical associations are returned under the property inactivationProperties.associationTargets as an array of objects. Each object includes the identifier of the historical association reference set and the target component identifier, in the same manner as described above – as an object with a single id property and as a string value.

While most object values where a single id key is present indicate that the property can be expanded to a full resource representation, this is currently not supported for inactivation properties; an expand option of inactivationProperties(expand(inactivationIndicator())) will not retrieve additional data for the indicator concept.

members()

Expands reference set members referencing this concept.

Note that this is different from reference set member expansion on a reference set, ie. referenceSet(expand(members())), as this option will return reference set members where the referencedComponentId property matches the concept SCTID, from multiple reference sets (if permitted by other expand options). Inactivation and historical association members can also be returned here, in their entirety (as opposed to the summarized form described in inactivationProperties() above).

Compare the output with the one returned when inactivation indicators were expanded. The last two reference set members correspond to the historical association and the inactivation reason, respectively:

The following expand options are supported within members(...):

  • active: true | false

Controls whether only active or inactive reference set members should be returned.

  • refSetType: "{type}" | [ "{type}"(,"{type}")* ]

The reference set type(s) as a string, to be included in the expanded output; when multiple types are accepted, values must be enclosed in square brackets and separated by a comma.

  • expand(...)

Allows nested expansion of reference set member properties.

The following reference set types can be specified in the refSetType option:

  • SIMPLE - simple type

  • SIMPLE_MAP - simple map type

  • LANGUAGE - language type

  • ATTRIBUTE_VALUE - attribute-value type

  • QUERY - query specification type

  • COMPLEX_MAP - complex map type

  • DESCRIPTION_TYPE - description type

  • CONCRETE_DATA_TYPE - concrete data type (vendor extension for representing concrete values in Snow Owl)

  • ASSOCIATION - association type

  • MODULE_DEPENDENCY - module dependency type

  • EXTENDED_MAP - extended map type

  • SIMPLE_MAP_WITH_DESCRIPTION - simple map type with map target description (vendor extension for storing a descriptive label with map targets, suitable for display)

  • OWL_AXIOM - OWL axiom type

  • OWL_ONTOLOGY - OWL ontology declaration type

  • MRCM_DOMAIN - MRCM domain type

  • MRCM_ATTRIBUTE_DOMAIN - MRCM attribute domain type

  • MRCM_ATTRIBUTE_RANGE - MRCM attribute range type

  • MRCM_MODULE_SCOPE - MRCM module scope type

  • ANNOTATION - annotation type

  • COMPLEX_BLOCK_MAP - complex map with map block type (added for national extension support)

See the following example for combining reference set member status filtering and reference set type restriction:
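
A sketch of such a request (the concept identifier is a placeholder and the selected types are illustrative):

GET /snowowl/snomedct/SNOMEDCT/concepts/{conceptId}?expand=members(active:true, refSetType:["LANGUAGE","OWL_AXIOM"])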

module()

Expands the concept's module identified by property moduleId, and places it under the property module. As the returned resource is a concept itself, property expansion can apply to modules as well by using a nested expand() option.

Property module does not appear in compact form (with a single id key) in the standard representation.

definitionStatus()

Expands the definition status concept identified by the property definitionStatusId, and places it under the property definitionStatus. When this property is not expanded, a smaller placeholder object with a single id property is returned in the response. Nested expand() options work the same way as in the case of module().

pt() and fsn()

In addition to the standard locales like en-US, Snow Owl uses an extension to allow referring to language reference sets by identifier, in the form of {language code}-x-{language reference set ID}. "Traditional" language tags are resolved to language reference set IDs as part of executing the request by consulting the code system settings:

An example response pair demonstrating cases where the PT is different in certain dialects:
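
A sketch using the well-known US/GB difference for 387517004 |Paracetamol|; the language reference set IDs are the standard US English (900000000000509007) and GB English (900000000000508004) ones, and the responses are abbreviated:

GET /snowowl/snomedct/SNOMEDCT/concepts/387517004?expand=pt()

[Request headers]
Accept-Language: en-x-900000000000509007

[Response body (excerpt)]
"pt": { "term": "Acetaminophen", ... }

GET /snowowl/snomedct/SNOMEDCT/concepts/387517004?expand=pt()

[Request headers]
Accept-Language: en-x-900000000000508004

[Response body (excerpt)]
"pt": { "term": "Paracetamol", ... }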

descriptions()

Expands all descriptions of the concept, adding them to the property descriptions. The collection resource's limit and total values are set to the same value (the number of descriptions returned for the concept) because a description fetch limit can not be set via a property expand option.

The following expand options are supported within descriptions(...):

  • active: true | false

Controls whether only active or inactive descriptions should be included in the response. (If both are required, do not set any value for this expand property.)

  • typeId: "{expression}"

An ECL expression that restricts the typeId property of each returned description.

  • sort: "{field}(:{asc | desc})?"(, "{field}(:{asc | desc})")*

Items in the collection resource are sorted based on the sort configuration given in this option. A single, comma-separated string value is expected; field names and sort order must be separated by a colon (:) character. When no sort order is given, ascending order (asc) is assumed.

  • expand(...)

Allows nested expansion of description properties.

relationships()

Expands relationships whose sourceId matches the concept's SCTID, adding them to the property relationships. limit and total values on relationships are set to the same value (the number of relationships returned for the concept) because a relationship fetch limit can not be set via an expand option.

The following expand options are supported within relationships(...):

  • active: true | false

Controls whether only active or inactive relationships should be included in the response. (If both are required, do not set any value for this expand property.)

  • characteristicTypeId: "{expression}"

An ECL expression that restricts the characteristicTypeId property of each returned relationship. As an example, when this value is set to "<<900000000000006009", both stated and inferred relationships will be returned, as their characteristic type concepts are descendants of 900000000000006009|Defining relationship|.

  • typeId: "{expression}"

An ECL expression that restricts the typeId property of each returned relationship.

  • destinationId: "{expression}"

An ECL expression that restricts the destinationId property of each returned relationship.

  • sort: "{field}(:{asc | desc})?"(, "{field}(:{asc | desc})")*

Items in the collection resource are sorted based on the sort configuration given in this option. A single, comma-separated string value is expected; field names and sort order must be separated by a colon (:) character. When no sort order is given, ascending order (asc) is assumed.

  • expand(...)

Allows nested expansion of relationship properties.

inboundRelationships()

Retrieves all "inbound" relationships, where the destinationId property matches the SCTID of the concept(s), adding them to property inboundRelationships.

limit and total values on inboundRelationships are set to the same value (the number of inbound relationships returned for the concept), but differently from options above, a fetch limit is applied when it is specified.

  • destinationId: "{expression}"

This option is not supported on inboundRelationships; all destination IDs match the concept's SCTID.

  • sourceId: "{expression}"

An ECL expression that restricts the sourceId property of each returned relationship.

  • limit: {limit}

Limits the maximum number of inbound relationships to be returned. Not recommended for use when the expand option applies to a collection of concepts, not just a single one, as the limit is not applied individually for each concept.

descendants() / statedDescendants()

Depending on which direct setting is used, retrieves all concepts whose [stated]parentIds and/or [stated]AncestorIds array contains this concept's SCTID. Results are added to property descendants or statedDescendants, based on the option name used.

Only active concepts are returned, as these are expected to have active "IS A" relationships or OWL axioms that describe the relative position of the concept within the terminology graph.

The following options are available:

  • direct: true | false (required)

Controls whether only direct descendants should be collected or a transitive closure of concept subtypes.

When set to true, property [stated]parentIds will be searched only, otherwise both [stated]parentIds and [stated]AncestorIds are used. The presence or absence of the "stated" prefix in the search field depends on the option name.

  • limit: 0

Applicable only when a single concept's properties are expanded. Collects the number of descendants in an efficient manner, and sets the total property of the returned collection resource without including any concepts in it. Not used when a collection of concepts are expanded in a single request, or any other value is given.

  • expand(...)

Allows nested expansion of concept properties on each collected descendant.

ancestors() / statedAncestors()

Depending on which direct setting is used, retrieves all concepts that appear in this concept's [stated]parentIds and/or [stated]AncestorIds array. Results are added to property ancestors or statedAncestors, based on the option name used.

The following options are available:

  • direct: true | false (required)

Controls whether only direct ancestors should be collected or a transitive closure of concept supertypes.

When set to true, property [stated]parentIds will be used only for concept retrieval, otherwise the union of [stated]parentIds and [stated]AncestorIds are collected (the special placeholder value "-1" is ignored). The presence or absence of the "stated" prefix in the search field depends on the option name.

  • limit: 0

Collects the number of ancestors in an efficient manner, and sets the total property of the returned collection resource without including any concepts in it. Not used when any other value is given (however, this property expansion supports cases where multiple concepts' ancestors need to be returned).

  • expand(...)

Allows nested expansion of concept properties on each collected ancestor.

Operations

Read concept (GET)

A GET request that includes a concept identifier as its last path parameter will return information about the concept in question:
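
For example (the response is abbreviated; see the resource format described earlier):

GET /snowowl/snomedct/SNOMEDCT/concepts/138875005

{
  "id": "138875005",
  "released": true,
  "active": true,
  ...
}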

Query parameters

  • expand={options}

  • field={field1}[,{fieldN}]*

Restricts the set of fields returned from the index. Results in a smaller response object when only specific information is needed.

Supported names for field selection are the following:

  • active

  • activeMemberOf

  • ancestors - controls the appearance of ancestorIds as well

  • definitionStatusId

  • doi

  • effectiveTime

  • exhaustive

  • iconId

  • id - always included in the response, even when not present as a field parameter

  • mapTargetComponentType

  • memberOf

  • moduleId

  • namespace

  • parents - controls the appearance of parentIds as well

  • preferredDescriptions

  • refSetType

  • referencedComponentType

  • released

  • score

  • semanticTags

  • statedAncestors - controls the appearance of statedAncestorIds as well

  • statedParents - controls the appearance of statedParentIds as well

  • created and revised - these fields are associated with revision control, and even though they are listed as supported fields, they do not appear in the response even when explicitly requested.

Specifying any other field name results in a 400 Bad Request response:

Fields with a value of null do not appear in the response, even if they are selected for inclusion.

Request headers

  • Accept-Language: {language-range}[;q={weight}](, {language-range}[;q={weight}])*

Specifying an unknown language or dialect results in a 400 Bad Request response:

Find concepts (GET)

A GET request that ends with concepts as its last path parameter will search for concepts matching all of the constraints supplied as query parameters. By default (when no query parameter is added) it returns all concepts.

The response consists of a collection of concept resources, a searchAfter key (described in section "Query parameters" below), the limit used when computing response items and the total hit count:

Query parameters

  • definitionStatus={eclExpression} | {id1}[,{idN}]*

An ECL expression or enumerated list that describes the allowed set of SCTIDs that must appear in matching concepts' definitionStatusId property. Since there are only two values used, 900000000000074008|Primitive| and 900000000000073002|Defined| for primitive and fully defined concepts, respectively, a single SCTID is usually entered here.

  • ecl={eclExpression}

Matching concepts must be part of the evaluated result set of the supplied Expression Constraint Language expression. As ECL syntax uses special symbols, query parameters should be encoded to URL-safe characters. The examples in this section use the cleartext form for better readability.

  • statedEcl={eclExpression}

Same as ecl, but the expression is evaluated over the stated view of the terminology.

  • semanticTag={tag1}[,{tagN}]*

Filters concepts by a comma-separated list of allowed hierarchy tags. Matching concepts can have any of the supplied tags present (at least one) on their Fully Specified Names.

  • term={searchTerm}

Matching concepts must have an active description whose term matches the string specified here. The search is executed in "smart match" mode; the following examples show which search expressions match which description terms:

  • descriptionType={eclExpression} | {id1}[,{idN}]*

Restricts the result set by description type; matches must have at least one active description whose typeId property is included in the evaluated ECL result set or SCTID list. It is typically used in combination with term (see above) to control which type of descriptions should be matched by term.

  • parent={id1}[,{idN}]*

  • statedParent={id1}[,{idN}]*

  • ancestor={id1}[,{idN}]*

  • statedAncestor={id1}[,{idN}]*

Filters concepts by hierarchy. All four query parameters accept a comma-separated list of SCTIDs; the result set will contain direct descendants of the specified values in the case of parent and statedParent, and a transitive closure of descendants for ancestor and statedAncestor (including direct children). Parameters starting with stated... will use the stated IS A hierarchy for computations.

  • doi=true | false

Controls whether relevance-based sorting should take Degree of Interest (DOI for short) into account. When enabled, concepts that are used frequently in a clinical environment are favored over concepts with a lower likelihood of use.

  • namespace={namespaceIdentifier}

Filters concepts by the namespace identifier embedded in their SCTID.

  • namespaceConceptId={id1}[,{idN}]*

Filters concepts by namespace concept; the parameter accepts a comma-separated list of namespace concept SCTIDs.

  • isActiveMemberOf={eclExpression} | {id1}[,{idN}]*

This filter accepts either a single ECL expression, or a comma-separated list of reference set SCTIDs. For each matching concept at least one active reference set member must exist where the referencedComponentId points to the concept and the referenceSetId property is listed in the filter, or is a member of the evaluated ECL expression's result set.

  • effectiveTime={yyyyMMdd} | Unpublished

Filters concepts by effective time. The query parameter accepts a single effective time in yyyyMMdd (short) format, or the literal Unpublished when searching for concepts that have been modified since they were last published as part of a code system version.

Note that only the concept's effective time is taken into account, not any of its related core components (descriptions, relationships) or reference set members. If the concept's status, definition status or module did not change since the last release, its effective time will not change either.

When searching for Unpublished concepts, the effectiveTime property will not appear on returned concept resources, as the value is null for all unpublished components.

  • active=true | false

Filters concepts by status. When set to true, only active concepts are added to the resulting collection, while a value of false collects inactive concepts only. (If both active and inactive concepts should be returned, do not add this parameter to the query.)

  • module={eclExpression} | {id1}[,{idN}]*

Filters concepts by moduleId. The query parameter accepts either a single ECL expression, or a comma-separated list of module SCTIDs; concepts must have a moduleId property that is included in the ID list or the evaluated ECL result.

  • id={id1}[,{idN}]*

Filters concepts by SCTID. The parameter accepts a comma-separated list of IDs; matching concepts must have an id property that matches any of the specified identifiers.

  • sort: "{field}(:{asc | desc})?"(, "{field}(:{asc | desc})")*

Sorts returned concept resources based on the sort configuration given in this parameter. Field names and sort order must be separated by a colon (:) character. When no sort order is given, ascending order (asc) is assumed.

The default behavior is to sort results by id, in ascending order. SCTIDs are sorted lexicographically, not as numbers; this means that eg. 10683591000119104 will appear before 10724008, as their first two digits are the same, but the third digit is smaller in the former identifier.

  • limit={limit}

Controls the maximum number of items that should be returned in the collection. When not specified, the default limit is 50 items.

  • searchAfter={searchAfter}

Supports keyset pagination, ie. retrieving the next page of items based on the response for the current page. To use, set limit to the number of items expected on a single page, then run the first search request without setting a searchAfter key. The returned response will include the value to be inserted into the next request:
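
A sketch of the exchange, with the searchAfter key shown as a placeholder:

GET /snowowl/snomedct/SNOMEDCT/concepts?limit=50

{
  "items": [ ... ],
  "searchAfter": "<opaque-key>",
  "limit": 50,
  "total": 421657
}

GET /snowowl/snomedct/SNOMEDCT/concepts?limit=50&searchAfter=<opaque-key>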

The process can be repeated until the items array turns up empty, indicating that there are no more pages to return.

  • expand={options}

  • field={field1}[,{fieldN}]*

Request headers

  • Accept-Language: {language-range}[;q={weight}](, {language-range}[;q={weight}])*

Find concepts (POST)

POST requests submitted to concepts/search perform the same search operation as described for the GET request above, but each query parameter is replaced by a property in the JSON request body:
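
For example, the query string ?term=pneumonia&ecl=<<404684003&active=true&limit=10 corresponds to the following sketch (property values are illustrative):

POST /snowowl/snomedct/SNOMEDCT/concepts/search

[Request body]
{
  "term": "pneumonia",
  "ecl": "<<404684003",
  "active": true,
  "limit": 10
}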

Request headers

  • Accept-Language: {language-range}[;q={weight}](, {language-range}[;q={weight}])*

Create concept (POST)

POST requests submitted to concepts create a new concept with the specified parameters, then commit the result to the terminology repository.

The resource path typically consists of a single code system identifier for these requests, indicating that changes should go directly to the working branch of the code system, or a direct child of the working branch for isolating a set of changes that can be reviewed and merged in a single request.

The request body needs to conform to the following requirements:

  • include at least one Fully Specified Name (FSN)

  • include at least one preferred synonym (Preferred Term, PT)

The SCTID of created components can be specified in two ways:

  1. Explicitly by setting the id property on the component object; the request fails when an existing component in the repository already has the same SCTID assigned to it;

  2. Allowing the server to generate an identifier by leaving id unset and populating namespaceId with the expected namespace identifier, eg. "1000154". Requests using namespaceId should not fail due to an SCTID collision, as generated identifiers are checked for uniqueness.

When a namespaceId is set on the concept level, descriptions and relationships will use this value by default, so in this case neither id nor namespaceId needs to be set on them. The same holds true for moduleId – the concept's module identifier is applied to all related descriptions, relationships and reference set members in the request, unless it is set to a different value on the component object.

Please see the example below for required properties. (Note that it is non-executable in its current form, as the OWL axiom reference set member can not be created without knowing the concept's SCTID in advance.)
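
A non-executable sketch of such a request body; identifiers and terms are illustrative, acceptability is expressed against the US English language reference set, and the acceptability and commitComment property names are assumptions based on Snow Owl conventions:

POST /snowowl/snomedct/SNOMEDCT/concepts

[Request body]
{
  "namespaceId": "1000154",
  "moduleId": "900000000000207008",
  "descriptions": [
    {
      "typeId": "900000000000003001", // Fully Specified Name
      "term": "Example disorder (disorder)",
      "languageCode": "en",
      "acceptability": { "900000000000509007": "PREFERRED" }
    },
    {
      "typeId": "900000000000013009", // Synonym (the Preferred Term)
      "term": "Example disorder",
      "languageCode": "en",
      "acceptability": { "900000000000509007": "PREFERRED" }
    }
  ],
  "members": [
    {
      "referenceSetId": "733073007", // OWL axiom reference set
      ... // the axiom expression requires the concept's SCTID in advance
    }
  ],
  "commitComment": "Create example disorder concept"
}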

A successful commit will result in a 201 Created response; the response header Location can be used to extract the generated concept identifier. Validation errors in the request body cause a 400 Bad Request response.

Request headers

  • X-Author: {author_id}

Changes the author recorded in the commit message from the authenticated user (default) to the specified user.

Update concept (PUT)

The following properties can be updated on any component. If they are not included in the request, the corresponding component property remains unchanged.

  • moduleId: string

  • active: boolean

  • effectiveTime: string (in YYYYmmdd, "short" format)

Specifying an empty string for inactivationProperties.inactivationIndicatorId will remove an existing indicator, while an empty associationTargets array will delete historical association reference set members for the concept. This is handled automatically when the concept is re-activated, so inactivationProperties can be omitted from such requests entirely:
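
A sketch of such a re-activation request (the commitComment property is an assumption, mirroring other commit-producing requests):

PUT /snowowl/snomedct/SNOMEDCT/concepts/{conceptId}

[Request body]
{
  "active": true,
  "commitComment": "Re-activate concept"
}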

Properties that can be updated on the concept itself are:

  • definitionStatusId: string

  • subclassDefinitionStatus: "DISJOINT_SUBCLASSES" | "NON_DISJOINT_SUBCLASSES"

In addition to the above, core components and reference set members related to the concept in question can be updated in a single request by including any of the following properties:

  • descriptions

  • relationships

  • members

If a collection resource property is not included in the update request, the corresponding component type is unchanged. An empty array attempts to delete all existing related components. Otherwise, the components included in the collection are compared by SCTID/UUID to existing components, and it is decided whether:

  • a new component should be created (if the identifier did not appear previously in the terminology store)

  • an existing component should be updated (if the identifier existed previously in the terminology store)

  • an existing component should be deleted (if the identifier does not exist in the request, but existed previously in the terminology store)

Successful updates return 204 No Content from the server. Updates that attempt to modify the state of a missing or deleted concept result in a 404 Not Found response.

Query parameters

  • force=true | false

Specifies whether updating the effective time of the concept should be allowed. The default value is false; in such cases, supplying an effective time property for the update is disallowed. The component's effective time after an update is computed automatically at all times; when the force property is set to true, it can be overridden externally.

Request headers

  • X-Author: {author_id}

Changes the author recorded in the commit message from the authenticated user (default) to the specified user.

Delete concept (DELETE)

DELETE requests sent to a URI where the last path parameter is an existing concept ID will remove the concept and all of its associated components (descriptions, relationships, reference set members referring to the concept) from the terminology repository.

Deletes are acknowledged with a 204 No Content response on success. Deletion can be verified by trying to retrieve concept information from the same resource path – a 404 Not Found should be returned in this case.
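For example (the concept ID is illustrative):

DELETE /snomedct/SNOMEDCT-B2I/concepts/69949008
// Response: 204 No Content

GET /snomedct/SNOMEDCT-B2I/concepts/69949008
// Response: 404 Not Found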

Note that resource branches maintain content in isolation, and so deleting a concept on eg. a task branch will not remove the concept from the code system's working branch, until work on the task branch is approved and merged into mainline.

Query parameters

  • force=true | false

Specifies whether deletion of the concept should be allowed, if it has components that were already part of an RF2 release (or code system version). This is indicated by the released property on each component.

The default value is false; with the option disabled, attempting to delete a released component results in a 409 Conflict response:

Only administrators should set this parameter to true. It is advised to delete redundant or erroneous components before they are put in circulation as part of a SNOMED CT RF2 release. In other cases, inactivation should be preferred over removal.

Request headers

  • X-Author: {author_id}

Changes the author recorded in the commit message from the authenticated user (default) to the specified user.

Reference Sets

Two categories make up Snow Owl's Reference Set API:

  1. Reference Sets category to get, search, create and modify reference sets

  2. Reference Set Members category to get, search, create and modify reference set members

Basic operations like create, update, delete are supported for both categories.
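As a sketch, both categories are addressed by identifier, following the /:path/refsets/:id pattern used by the Bulk API below (the member UUID is borrowed from another example in this document):

// Get a single reference set by its identifier concept's SCTID
GET /snomedct/MAIN/refsets/900000000000497000

// Get a single reference set member by its UUID
GET /snomedct/MAIN/members/00000193-e889-4d3f-b07f-e0f45eb77940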

Actions API

On top of the basic operations, reference sets and members support actions. Actions have an action property to specify which action to execute; the rest of the JSON properties are used as the body of the action.

Supported reference set actions are:

  1. sync - synchronize all members of a query type reference set by executing their query and comparing the results with the current members of their referenced target reference set
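A minimal sketch of invoking this action, assuming the same actions sub-resource pattern that is shown for members below:

POST /:path/refsets/:id/actions
{
  "commitComment": "Sync all members of my query type reference set",
  "action": "sync"
}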

Supported reference set member actions are:

  1. create - create a reference set member (uses the same body as POST /members)

  2. update - update a reference set member (uses the same body as PUT /members)

  3. delete - delete a reference set member

  4. sync - synchronize a single member by executing the query and comparing the results with the current members of the referenced target reference set

For example, the following will sync a query type reference set member's referenced component with the result of the reevaluated member's ESCG query:

Bulk API

The member list of a single reference set can be modified by using the following bulk-like update endpoint:

Input

The request body should contain the commitComment property and a requests array. The requests array must contain actions (see Actions API) that are enabled for the given set of reference set members. Member create actions can omit the referenceSetId parameter; those will use the one defined as a path parameter in the URL. For example, by using this endpoint you can create, update and delete members of a reference set at once, in one single commit.
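For instance, the following sketch creates one member and deletes another in a single commit; the identifiers are illustrative, and the memberId property used to address the member being deleted is an assumption:

PUT /snomedct/MAIN/refsets/900000000000497000/members
{
  "commitComment": "Add and remove members in one commit",
  "requests": [
    {
      "action": "create",
      "moduleId": "900000000000207008",
      "referencedComponentId": "138875005",
      "mapTarget": "XUPhG"
    },
    {
      "action": "delete",
      "memberId": "00000193-e889-4d3f-b07f-e0f45eb77940"
    }
  ]
}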

Compare

Compare API

Terminology changes committed to a source or target branch can be compared by creating a compare resource.

A review identifier can be added to merge requests as an optional property. If the source or target branch state is different from the values captured when creating the review, the merge/rebase attempt will be rejected. This can happen, for example, when additional commits are added to the source or the target branch while a review is in progress; the review resource state becomes STALE in such cases.
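A sketch of attaching a review to a merge request; note that the reviewId property name is an assumption, not confirmed by this document:

POST /merges
{
  "source": "MAIN/branchName",
  "target": "MAIN",
  "reviewId": "<review identifier>"
}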

Reviews and concept change sets have a limited lifetime. CURRENT reviews are kept for 15 minutes, while review objects in any other state are valid for 5 minutes by default. These values can be changed in the server's configuration file.

Compare two branches

Response

Read component state from comparison

Terminology components (and in fact any content) can be read from any point in time by using the special path expression: {branch}@{timestamp}. To get the state of a SNOMED CT Concept from the previous comparison on the compareBranch at the returned compareHeadTimestamp, you can use the following request:

Request

Response

To get the state of the same SNOMED CT Concept but on the base branch, you can use the following request:

Request

Response

Additionally, if you need to compute what has changed on the component since the creation of the task, you can get back the base version of the changed component by using another special path expression: {branch}^.

Request

Response

The @ and ^ characters are not URL-safe, so they must be percent-encoded before sending the HTTP request.
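For example, @ is sent as %40 and ^ as %5E:

GET /snomedct/MAIN%401567282434400/concepts/138875005 // MAIN@1567282434400
GET /snomedct/MAIN/a%5E/concepts/138875005            // MAIN/a^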

Branching

Snow Owl provides branching support for terminology repositories. In each repository there is a branch called MAIN, which always exists and is always in the UP_TO_DATE state. The MAIN branch represents the latest working version of your terminology (similar to a master branch on GitHub).

You can create your own branches and create/edit/delete components and other resources on them. Branches are identified with their full path, which should always start with MAIN. For example, the branch MAIN/a/b/c/d represents a branch under the parent MAIN/a/b/c with the name d.

Later you can decide either to delete the branch or to merge it back to its parent. To properly merge a branch back into its parent, it is sometimes required to rebase (synchronize) it first with its parent to get the latest changes. Whether a rebase is needed can be decided via the state attribute of the branch, which represents its current state compared to its parent.

Branch states

There are five different branch states available:

  1. UP_TO_DATE - the branch is up-to-date with its parent; there are no changes on either the branch or its parent

  2. FORWARD - the branch has at least one commit while the parent is still unchanged. Merging a branch requires this state; otherwise the request will return an HTTP 409 Conflict.

  3. BEHIND - the parent of the branch has at least one commit while the branch is still unchanged. The branch can be safely rebased with its parent.

  4. DIVERGED - both parent and branch have at least one commit. The branch must be rebased first before it can be safely merged back to its parent.

  5. STALE - the branch is no longer in relation with its former parent, and should be deleted.

Snow Owl supports merging of unrelated (STALE) branches, so branch MAIN/a can be merged into MAIN/b; there does not have to be a direct parent-child relationship between the two branches.
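Such a merge goes through the same /merges endpoint described below, with the two unrelated branches given as source and target:

POST /merges
{
  "source" : "MAIN/a",
  "target" : "MAIN/b"
}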

Basics

Get a branch

Response

Get all branches

Response

Create a branch

Input

Response

Delete a branch

Response

Merging

Perform a merge

Input

Response

Perform a rebase

Input

Response

Monitor progress of a merge or rebase

Response

Remove merge or rebase queue item

Response

Introduction

POST requests that create a resource will return a Location response header that holds the URL of the created resource. It is highly recommended that API clients use these URLs; doing so will make future upgrades of the API easier for developers. All URLs are expected to be proper URI templates, as described in RFC 6570.
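As an illustration, after a create request returns a Location header, the client should follow that URL verbatim instead of constructing its own:

// Response to a concept create request:
// 201 Created
// Location: /snomedct/SNOMEDCT-B2I/concepts/<SCTID of created concept>

// Follow-up request using the returned URL as-is:
GET /snomedct/SNOMEDCT-B2I/concepts/<SCTID of created concept>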

To troubleshoot these, please examine the log files at {SERVER_HOME}/serviceability/logs/log.log and/or raise an issue on GitHub.

Swagger documentation is available on your Snow Owl instance at /snowowl/snomedct.

SNOMED CT code system URLs follow the conventions described in the SNOMED CT URI Standard.

ICD-10

ICD-10 URLs typically follow the naming convention described in HL7's FHIR specification: http://hl7.org/fhir/sid/icd-10-[x], where the -[x] suffix is only included if it is a national variant. One exception is the German Modification, where the publisher uses a different value.

LOINC

Local Code Systems (LCS)

Each concept is associated with human-readable descriptions that help users select the SCTID appropriate for their use case, as well as relationships that form links between other concepts in the terminology, further clarifying their intended meaning. The APIs for manipulating the latter two types of components are covered in sections Descriptions and Relationships, respectively.

The resource includes all RF2 properties that are defined in SNOMED International's Release File Specification:

A descriptive key for the concept's icon. The icon identifier typically corresponds to the lowercase, underscore-separated form of the hierarchy tag contained in each concept's Fully Specified Name (or FSN for short). The following keys are currently expected to appear in responses (subject to change):

Currently unsupported. Indicates whether a parent concept's direct descendants form a disjoint union in OWL 2 terms; when set to DISJOINT_SUBCLASSES, child concepts are assumed to be pairwise disjoint and together cover all possible cases of the parent concept.

Expands reference set metadata and content, available on identifier concepts.

Note that the response object for property referenceSet can also be retrieved directly using the Reference Sets API.

Reference set members can also be fetched via the SNOMED CT Reference Set Member API.

Reference set members can also be fetched in a "standalone" fashion via the SNOMED CT Reference Set Member API.

Allowed reference set type constants are (these are described in more detail in the Reference Set Types section of SNOMED International's "Reference Sets Practical Guide" and the Reference Set Types section of the "Release File Specification"):

Expands the Preferred Term (PT for short) and the Fully Specified Name (FSN for short) of the concept, respectively.

These descriptions are language context-dependent; the use of certain descriptions can be preferred in one dialect and acceptable or discouraged in others. The final output is controlled by the Accept-Language request header, which clients can use to supply a list of locales in order of preference.

Expands all descriptions associated with the concept, and adds them to a collection resource (that includes an element limit and a total hit count) under the property descriptions. These can also be retrieved separately by the use of the SNOMED CT Description API.

An ECL expression that restricts the typeId property of each returned description. The simplest expression is a single SCTID, eg. when this option has a value of "900000000000013009", only Synonyms will be expanded.

Retrieves all "outbound" relationships, where the sourceId property matches the SCTID of the concept(s), adding them to a property named relationships as a collection resource object. The same set of relationships can also be retrieved in standalone form via Snow Owl's .

The same set of options is supported within inboundRelationships as in relationships (see above), with three important differences:

Concept properties that should be returned along with the original request, as part of the concept resource. See available options in the Property expansion section above.

Controls the logic behind Preferred Term and Fully Specified Name selection for the concept. See the documentation for the pt() and fsn() expand options for details.

Restricts the returned set of concepts to those that match the specified ECL expression. The query parameter can be used on its own for evaluation of expressions, or in combination with other query parameters. Expressions conforming to the short form of ECL 1.5 syntax are accepted. The expression is evaluated over the inferred view, based on the currently persisted inferred relationships.

Same as ecl, but the input expression is evaluated over the stated view by using stated relationships (if present) and OWL axioms for evaluation.

The SCTID of matching concepts must have the specified 7-digit namespace identifier, eg. 1000154. When matching by namespace concept ID, a comma-separated list of SCTIDs is expected, and the associated 7-digit identifier will be extracted from the active FSNs of each concept entered here.

Field names supported for sorting are the same that are used for field selection; please see above for the complete list.

searchAfter keys should be considered opaque; they can not be constructed to jump to an arbitrary point in the enumeration. Keyset pagination also doesn't handle cases gracefully where eg. concepts with "smaller" SCTIDs are inserted while pages are retrieved from the server. If a consistent result set is expected, a point-in-time path parameter should be used in consecutive search requests.

Concept properties that should be returned along with the original request, as part of the concept resource. See available options in the Property expansion section above.

Restricts the set of fields returned from the index. Results in a smaller response object when only specific information is needed. See above for the list of supported field names.

Controls the logic behind Preferred Term and Fully Specified Name selection for returned concepts. See the documentation for the pt() and fsn() expand options for details.

Controls the logic behind Preferred Term and Fully Specified Name selection for returned concepts. See the documentation for the pt() and fsn() expand options for details.

PUT requests to locations that identify a concept resource (same as when retrieving concept content) will update the concept. Following a successful commit, the state of the concept on the branch should match the state received in the request body.

When inactivating a concept, an object named inactivationProperties can be added that can point to possible replacement concepts and/or specify the reason for inactivation:

Each of the above can hold a collection resource of the respective component resource type. These resources are described in detail in sections Descriptions, Relationships and Reference set members, respectively.

{
  "id": "138875005",
  "released": true,
  "active": true,
  "effectiveTime": "20020131",
  "moduleId": "900000000000207008",
  "iconId": "snomed_rt_ctv3",
  "definitionStatus": {
    "id": "900000000000074008"
  },
  "subclassDefinitionStatus": "NON_DISJOINT_SUBCLASSES",
  "ancestorIds": [],
  "parentIds": [
    "-1"
  ],
  "statedAncestorIds": [],
  "statedParentIds": [
    "-1"
  ],
  "definitionStatusId": "900000000000074008"
}
GET /snomedct/MAIN/concepts/425758004 // Diagnostic blood test
{
  [...]
  "ancestorIds": [
    "-1",        // Special value for taxonomy root
    "15220000",  // Laboratory test
    "71388002",  // Procedure
    "108252007", // Laboratory procedure (not pictured below)
    "128927009", // Procedure by method
    "138875005", // SNOMED CT Concept
    "362961001", // Procedure by intent
    "386053000"  // Evaluation procedure
  ],
  "parentIds": [
    "103693007", // Diagnostic procedure
    "396550006"  // Blood test
  ],
  [...]
}
GET /snomedct/MAIN/concepts/900000000000497000?expand=referenceSet() // CTV3 simple map
{
  "id": "900000000000497000",
  "active": true,
  [...]
  "referenceSet": {
    "id": "900000000000497000",
    "released": true,
    "active": true,
    "effectiveTime": "20020131",
    "moduleId": "900000000000012004",
    "iconId": "900000000000496009",
    "type": "SIMPLE_MAP",                    // Reference set type
    "referencedComponentType": "concept",    // Referenced component type
    "mapTargetComponentType": "__UNKNOWN__"  // Map target component type
                                             // (applicable to map type reference sets only)
  },
  [...]
}
GET /snomedct/MAIN/concepts/900000000000497000?expand=referenceSet(expand(members()))
{
  "id": "900000000000497000",
  [...]
  "referenceSet": {
    [...]
    "type": "SIMPLE_MAP",
    "referencedComponentType": "concept",
    "mapTargetComponentType": "__UNKNOWN__",
    "members": {
      "items": [
        {
          "id": "00000193-e889-4d3f-b07f-e0f45eb77940",
          "released": true,
          "active": true,
          "effectiveTime": "20190131",
          "moduleId": "900000000000207008",
          "iconId": "776792002",
          "referencedComponent": {
            "id": "776792002"
          },
          "refsetId": "900000000000497000", // Reference set ID matches the identifier concept's ID
                                            // for all members of the reference set
          "referencedComponentId": "776792002",
          "mapTarget": "XV8E7"
        },
        [...]
      ],
      "searchAfter": "AoE_BTAwMDcyYWIzLWM5NDgtNTVhYy04MTBkLTlhOGNhMmU5YjQ5Yg==",
      "limit": 50,
      "total": 481508
    }
  },
  [...]
}
GET /snomedct/MAIN/2011-07-31/concepts/86299006?expand=preferredDescriptions()
{
  "id": "86299006", // Concept SCTID
  [...]
  "preferredDescriptions": {
    "items": [
      {
        "id": "828532012",                        // Description SCTID
        "term": "Tetralogy of Fallot (disorder)", // Description term
        "concept": {
          "id": "86299006"
        },
        "type": {
          "id": "900000000000003001"
        },
        "typeId": "900000000000003001",           // Type: Fully Specified Name
        "conceptId": "86299006",                  // "conceptId" matches the returned concept's SCTID
        "acceptability": {
          "900000000000509007": "PREFERRED",      // Acceptability in reference set "US English"
          "900000000000508004": "PREFERRED"       // Acceptability in reference set "GB English"
        }
      },
      {
        "id": "143123019",
        "term": "Tetralogy of Fallot",
        "concept": {
          "id": "86299006"
        },
        "type": {
          "id": "900000000000013009"
        },
        "typeId": "900000000000013009",           // Type: Synonym
        "conceptId": "86299006",
        "acceptability": {
          "900000000000509007": "PREFERRED",
          "900000000000508004": "PREFERRED"
        }
      }
    ],
    "limit": 2,
    "total": 2
  },
  [...]
}
GET /snomedct/MAIN/concepts/103981000119101?expand=preferredDescriptions(),semanticTags()
{
  "id": "103981000119101",
  "released": true,
  "active": true,
  "effectiveTime": "20200131",
  "preferredDescriptions": {
    "items": [
      {
        "id": "3781804016",
        "term": "Proliferative retinopathy following surgery due to diabetes mellitus (disorder)",
        [...]
      },
      [...]
    ]
  },
  [...]
  "semanticTags": [ "disorder" ], // Extracted from the Fully Specified Name; see term above
  [...]
}
GET /snomedct/MAIN/concepts/99999003?expand=inactivationProperties()
{
  "id": "99999003",
  "active": false,
  "effectiveTime": "20090731",
  [...]
  "inactivationProperties": {
    "inactivationIndicator": {
      "id": "900000000000487009"
    },
    "associationTargets": [
      {
        "referenceSet": {
          "id": "900000000000524003"
        },
        "targetComponent": {
          "id": "416516009"
        },
        "referenceSetId": "900000000000524003",     // MOVED TO association reference set
        "targetComponentId": "416516009"            // Extension Namespace 1000009
      }
    ],
    "inactivationIndicatorId": "900000000000487009" // Moved elsewhere
  },
  [...]
}
GET /snomedct/MAIN/concepts/99999003?expand=members()
{
  "id": "99999003",
  [...]
  "members": {
    "items": [
      {
        "id": "f2b12ff9-794a-5a05-8027-88f0492f3766",
        "released": true,
        "active": true,
        "effectiveTime": "20020131",
        "moduleId": "900000000000207008",
        "iconId": "99999003",
        "referencedComponent": {
          "id": "99999003"
        },
        "refsetId": "900000000000497000",    // CTV3 simple map
        "referencedComponentId": "99999003", // all referencedComponentIds match the concept's SCTID
        "mapTarget": "XUPhG"                 // additional properties are displayed depending on the
                                             // reference set type
      },
      {
        "id": "5e9787df-11af-54ed-ae92-0ea3bc83f2ac",
        "released": true,
        "active": true,
        "effectiveTime": "20090731",
        "moduleId": "900000000000207008",
        "iconId": "99999003",
        "referencedComponent": {
          "id": "99999003"
        },
        "refsetId": "900000000000524003",    // MOVED TO association reference set
        "referencedComponentId": "99999003",
        "targetComponentId": "416516009"     // Extension Namespace 1000009
      },
      {
        "id": "9ffd949a-27d0-5811-ad48-47ff43e1bded",
        "released": true,
        "active": true,
        "effectiveTime": "20090731",
        "moduleId": "900000000000207008",
        "iconId": "99999003",
        "referencedComponent": {
          "id": "99999003"
        },
        "refsetId": "900000000000489007",    // Concept inactivation indicator reference set
        "referencedComponentId": "99999003",
        "valueId": "900000000000487009"      // Moved elsewhere
      }
    ],
    "limit": 3,
    "total": 3
  },
  [...]
}
GET /snomedct/MAIN/concepts/99999003?expand=members(active:true, refSetType:["ASSOCIATION","ATTRIBUTE_VALUE"])
{
  "id": "99999003",
  [...]
  "members": {
    "items": [
      {
        "id": "5e9787df-11af-54ed-ae92-0ea3bc83f2ac",
        "released": true,
        "active": true,
        "effectiveTime": "20090731",
        "moduleId": "900000000000207008",
        "iconId": "99999003",
        "referencedComponent": {
          "id": "99999003"
        },
        "refsetId": "900000000000524003",    // MOVED TO association reference set
        "referencedComponentId": "99999003",
        "targetComponentId": "416516009"     // Extension Namespace 1000009
      },
      {
        "id": "9ffd949a-27d0-5811-ad48-47ff43e1bded",
        "released": true,
        "active": true,
        "effectiveTime": "20090731",
        "moduleId": "900000000000207008",
        "iconId": "99999003",
        "referencedComponent": {
          "id": "99999003"
        },
        "refsetId": "900000000000489007",    // Concept inactivation indicator reference set
        "referencedComponentId": "99999003",
        "valueId": "900000000000487009"      // Moved elsewhere
      }
    ],
    "limit": 2,
    "total": 2
  },
  [...]
}
GET /snomedct/MAIN/concepts/138875005?expand=module()
{
  "id": "138875005",
  "active": true,
  [...]
  // The moduleId of the requested concept
  "moduleId": "900000000000207008",
  "module": {                   // Expanded module concept resource
    "id": "900000000000207008", // SCTID matches 138875005's moduleId
    "released": true,
    "active": true,
    "effectiveTime": "20020131",
    // The moduleId of the module concept
    "moduleId": "900000000000012004",
    "iconId": "900000000000445007",
    "definitionStatus": {
      "id": "900000000000074008"
    },
    "subclassDefinitionStatus": "NON_DISJOINT_SUBCLASSES",
    "ancestorIds": [...],
    [...]
    "definitionStatusId": "900000000000074008"
  },
  [...]
  "definitionStatusId": "900000000000074008"
}
GET /snomedct/MAIN/concepts/138875005?expand=definitionStatus()
{
  "id": "138875005",
  "active": true,
  // The definitionStatusId of the requested concept
  "definitionStatusId": "900000000000074008",
  "definitionStatus": {         // Expanded definition status concept resource
    "id": "900000000000074008", // SCTID matches 138875005's definitionStatusId
    "active": true,
    "effectiveTime": "20020131",
    [...]
    // The definitionStatusId of the definition status concept
    "definitionStatusId": "900000000000074008"
  },
  [...]
}
GET /codesystems/SNOMEDCT-UK-CL
{
  "id": "SNOMEDCT-UK-CL",
  "title": "SNOMED CT UK Clinical Extension",
  [...]
  "settings": {
    "languages": [
      {
        "languageTag": "en",   // the language tag
        "languageRefSetIds": [ // the corresponding language reference sets, in order of preference
          "900000000000509007",
          "900000000000508004",
          "999001261000000100",
          "999000691000001104"
        ]
      },
      {
        "languageTag": "en-us",
        "languageRefSetIds": [
          "900000000000509007"
        ]
      },
      {
        "languageTag": "en-gb",
        "languageRefSetIds": [
          "900000000000508004",
          "999001261000000100",
          "999000691000001104"
        ]
      },
      {
        "languageTag": "en-nhs-pharmacy",
        "languageRefSetIds": [
          "999000691000001104"
        ]
      },
      {
        "languageTag": "en-nhs-clinical",
        "languageRefSetIds": [
          "999001261000000100"
        ]
      }
    ],
    [...]
  },
  [...]
}
GET /snomedct/MAIN/concepts/703247007?expand=pt()
// Accept-Language: en-US
{
  "id": "703247007",
  [...]
  "pt": {
    "id": "3007370016",
    "term": "Color",
    [...]
    "conceptId": "703247007", // conceptId matches the concept's SCTID
    "acceptability": {
      // Use of "Color" is preferred in the US English language reference set,
      // but not acceptable in others
      "900000000000509007": "PREFERRED"
    }
  },
  [...]
}
GET /snomedct/MAIN/concepts/703247007?expand=pt()
// Accept-Language: en-x-900000000000508004
{
  "id": "703247007",
  [...]
  "pt": {
    "id": "3007469016",
    "term": "Colour",
    [...]
    "conceptId": "703247007",
    "acceptability": {
      // Use of "Colour" is preferred in the GB English language reference set,
      // but not acceptable in others
      "900000000000508004": "PREFERRED"
    }
  },
  [...]
}
GET /snomedct/MAIN/concepts/86299006?expand=descriptions(active: true, sort: "term.exact:asc")
{
  "id": "86299006",
  [...]
  "descriptions": {
    "items": [
      {
        "id": "1235125018",
        "released": true,
        "active": true,
        "effectiveTime": "20070731",
        "moduleId": "900000000000207008",
        "iconId": "900000000000013009",
        "term": "Fallot's tetralogy",   // Descriptions are sorted by term (case insensitive)
        "semanticTag": "",
        "languageCode": "en",
        "caseSignificance": {
          "id": "900000000000017005"
        },
        "concept": {
          "id": "86299006"
        },
        "type": {
          "id": "900000000000013009"
        },
        "typeId": "900000000000013009", // Synonym
        "conceptId": "86299006",        // conceptId property matches the concept's SCTID
        "caseSignificanceId": "900000000000017005",
        "acceptability": {
          "900000000000509007": "ACCEPTABLE",
          "900000000000508004": "ACCEPTABLE"
        }
      },
      {
        "id": "143125014",
        "active": true,
        "term": "Subpulmonic stenosis, ventricular septal defect, overriding aorta, AND right ventricular hypertrophy",
        [...]
      },
      {
        "id": "143123019",
        "active": true,
        "term": "Tetralogy of Fallot",
        [...]
      },
      {
        "id": "828532012",
        "active": true,
        "term": "Tetralogy of Fallot (disorder)",
        "typeId": "900000000000003001", // Fully Specified Name
        [...]
      },
      {
        "id": "1235124019",
        "active": true,
        "term": "TOF - Tetralogy of Fallot",
        [...]
      }
    ],
    "limit": 5,
    "total": 5
  },
  [...]
}
GET /snomedct/MAIN/concepts/404684003?expand=relationships(active: true)
{
  "id": "404684003", // Clinical finding
  "active": true,
  [...]
  "relationships": {
    "items": [
      {
        "id": "2472459022",
        "released": true,
        "active": true,
        "effectiveTime": "20040131",
        "moduleId": "900000000000207008",
        "iconId": "116680003",
        "destinationNegated": false,
        "relationshipGroup": 0,
        "unionGroup": 0,
        "characteristicType": {
          "id": "900000000000011006"
        },
        "modifier": {
          "id": "900000000000451002"
        },
        "source": {
          "id": "404684003"
        },
        "type": {
          "id": "116680003"
        },
        "destination": {
          "id": "138875005"
        },
        "typeId": "116680003",
        "modifierId": "900000000000451002",
        "sourceId": "404684003", // sourceId property matches concept's SCTID
        "destinationId": "138875005",
        "characteristicTypeId": "900000000000011006"
      }
    ],
    "limit": 1,
    "total": 1
  },
  [...]
}
GET /snomedct/MAIN/concepts/138875005?expand=descendants(direct: true)
{
  "id": "138875005", // SNOMED CT Concept
  "active": true,
  [...]
  "descendants": {
    "items": [
      {
        "id": "105590001", // Substance
        "released": true,
        "active": true,
        "effectiveTime": "20020131",
        "moduleId": "900000000000207008",
        "iconId": "substance",
        "definitionStatus": {
          "id": "900000000000074008"
        },
        "subclassDefinitionStatus": "NON_DISJOINT_SUBCLASSES",
        "ancestorIds": [
          "-1"
        ],
        "parentIds": [
          "138875005" // parentIds contains SNOMED CT Concept's SCTID, meaning this concept
                      // is a direct (inferred) descendant of it
        ],
        "statedAncestorIds": [
          "-1"
        ],
        "statedParentIds": [
          "138875005"
        ],
        "definitionStatusId": "900000000000074008"
      },
      [...]
    ],
    "limit": 50,
    "total": 19 // Total number of descendants
  },
  [...]
}
GET /snomedct/MAIN/2019-07-31/concepts/138875005
GET /snomedct/MAIN/2019-07-31/concepts/138875005?field=xyz
{
  "status": 400,
  "code": 0,
  "message": "Unrecognized concept model property '[xyz]'.",
  "developerMessage": "Supported properties are '[active, activeMemberOf, ancestors, ...]'.",
  "errorCode": 0,
  "statusCode": 400
}
GET /snomedct/MAIN/2019-07-31/concepts/138875005?field=id,active,score
{
  "id": "138875005",
  "active": true
  // score was not calculated, and so is not present
}
GET /snomedct/MAIN/2019-07-31/concepts/138875005?expand=fsn()
// Accept-Language: hu-HU
{
  "status": 400,
  "code": 0,
  "message": "Don't know how to convert extended locale [hu-hu] to a language reference set identifier.",
  "developerMessage": "Input representation syntax or validation errors. Check input values.",
  "errorCode": 0,
  "statusCode": 400
}
GET /snomedct/SNOMEDCT/2021-01-31/concepts
{
  "items": [
    {
      "id": "100000000", // Each item represents a concept resource
      "released": true,
      "active": false,
      "effectiveTime": "20090731",
      "moduleId": "900000000000207008",
      "iconId": "138875005",
      "definitionStatus": {
        "id": "900000000000074008"
      },
      "subclassDefinitionStatus": "NON_DISJOINT_SUBCLASSES",
      "ancestorIds": [],
      "parentIds": [
        "-1"
      ],
      "statedAncestorIds": [],
      "statedParentIds": [
        "-1"
      ],
      "definitionStatusId": "900000000000074008"
    },
    [...] // at most 50 items are returned when no limit is specified
  ],
  "searchAfter": "AoEpMTAwMDQyMDAz", // key can be used for paged results
  "limit": 50,                       // the limit given in the original request
                                     // (or the default limit if not specified)
  "total": 481509                    // the total number of concept matches
}
GET /snomedct/SNOMEDCT/2021-01-31/concepts?ecl=<<404684003|Clinical finding|:363698007|Finding site|=40238009|Hand joint structure|
{
  "items": [
    [...]
    {
      "id": "129157005",
      "active": true,
      [...]
      "pt": {
        "id": "2664900016",
        "term": "Traumatic dislocation of joint of hand", // Concept match based on ECL expression
        [...]
      },
      [...]
    },
    [...]
  ],
  "searchAfter": "AoEpNDQ4NDUzMDA0",
  "limit": 50,
  "total": 58
}
Search term       → Term of matched description
-----------------   ---------------------------
"Ångström"          "angstrom"                  (case insensitive, ASCII-folding)
"sys blo pre"       "Systolic blood pressure"   (prefix of each word, matching order)
"broken arm"        "Fracture of arm"           (synonym filter, ignored stopwords)
"greenstick frac"   "Greenstick fracture"       (prefix match for final query keyword,
                                                exact match for all others)
GET /snomedct/SNOMEDCT/2021-01-31/concepts?parent=138875005&field=id
{
  "items": [
    // Inferred direct descendants of 138875005|SNOMED CT Concept|
    { "id": "105590001" }, // Substance
    { "id": "123037004" }, // Body structure
    { "id": "123038009" }, // Specimen
    [...]
  ],
  "searchAfter": "AoEyOTAwMDAwMDAwMDAwNDQxMDAz",
  "limit": 50,
  "total": 19 // 19 top-level concepts returned in total
}
GET /snomedct/SNOMEDCT-UK-CL/concepts?namespaceConceptId=370138007&field=id
{
  "items": [
    // Concept IDs with a namespace identifier of "1000001", corresponding to
    // namespace concept 370138007|Extension Namespace {1000001}|
    {
      "id": "999000011000001104" // 99900001>>1000001<<104
    },
    [...]
  ],
  "searchAfter": "AoEyOTk5MDAwODcxMDAwMDAxMTAy",
  "limit": 50,
  "total": 4
}
GET /snomedct/SNOMEDCT/2021-01-31/concepts?effectiveTime=20170131&field=id,effectiveTime
{
  "items": [
    {
      "id": "10151000132103",
      "effectiveTime": "20170131" // Concept effective time matches query parameter
    },
    {
      "id": "10231000132102",
      "effectiveTime": "20170131"
    },
    [...]
  ],
  "searchAfter": "AoEwMTA3NTQ3MTAwMDExOTEwNw==",
  "limit": 50,
  "total": 5580 // Total number of concepts with effective time 2017-01-31
}
GET /snomedct/SNOMEDCT/2021-01-31/concepts?effectiveTime=20170131&field=id,effectiveTime
{
  "items": [
    {
      "id": "10151000132103",
      "effectiveTime": "20170131"
    },
    {
      "id": "10231000132102",
      "effectiveTime": "20170131"
    },
    [...]
  ],
  // Key to use in the request for the second page
  "searchAfter": "AoEwMTA3NTQ3MTAwMDExOTEwNw==",
  "limit": 50,
  "total": 5580
}

GET /snomedct/SNOMEDCT/2021-01-31/concepts?effectiveTime=20170131&field=id,effectiveTime&searchAfter=AoEwMTA3NTQ3MTAwMDExOTEwNw==
{
  "items": [
    // List continues from the last item of the previous request
    // (but the item itself is not included)
    {
      "id": "1075481000119105",
      "effectiveTime": "20170131"
    },
    {
      "id": "10759271000119104",
      "effectiveTime": "20170131"
    },
    [...]
  ],
  // Different key returned for the third page
  "searchAfter": "AoEwMTA4MTgxMTAwMDExOTEwNw==",
  "limit": 50,
  "total": 5580
}
POST /snomedct/SNOMEDCT/2021-01-31/concepts/search
// Request body
{
  // Query parameters allowing multiple values must be passed as arrays
  "expand": [ "pt()" ],
  "field": [ "id", "preferredDescriptions" ],
  "limit": 100,
  "active": true,
  "module": [ "900000000000012004" ]
}

// Response
{
  "items": [
    {
      "id": "1003316002",
      "moduleId": "900000000000012004",
      "pt": {
        "id": "4167978019",
        "term": "Extension Namespace 1000256",
        [...]
      }
    },
    {
      "id": "1003317006",
      "moduleId": "900000000000012004",
      "pt": {
        "id": "4167981012",
        "term": "Extension Namespace 1000257",
        [...]
      }
    }
  ],
  "searchAfter": "AoEqMTAwMzMxNzAwNg==",
  "limit": 2,
  "total": 1802
}
// Create a concept on the working branch of code system SNOMEDCT-B2I
POST /snomedct/SNOMEDCT-B2I/concepts
// Request body
{
  "active": true,
  "moduleId": "636635721000154103", // SNOMED CT B2i extension module
  "namespaceId": "1000154",         // B2i Healthcare's namespace identifier
  "definitionStatusId": "900000000000074008", // Primitive
  "descriptions": [
    // Create mandatory FSN and PT
    {
      "active": true,
      // "moduleId", "namespaceId" will be set from the concept
      // "id" will be generated for the description
      // "conceptId" will be automatically populated with the new concept's SCTID
      "typeId": "900000000000003001", // Fully specified name
      "term": "Example concept (disorder)",
      "languageCode": "en",
      "caseSignificanceId": "900000000000448009", // Case insensitive
      "acceptability": {
        /*
           Acceptability map entries are keyed by language reference set ID.
           Allowed values are "PREFERRED" and "ACCEPTABLE".
        */
        "900000000000509007": "PREFERRED" // US English
      }
    },
    {
      "active": true,
      "typeId": "900000000000013009", // Synonym
      "term": "Example concept",
      "languageCode": "en",
      "caseSignificanceId": "900000000000448009", // Case insensitive
      "acceptability": {
        "900000000000509007": "PREFERRED" // US English
      }
    }
  ],
  "relationships": [
    /*
       Including relationships on a new concept request is optional.

       However, when no inferred IS A relationship is created, the concept will not
       be visible in the inferred hierarchy (and not show up in eg. ECL evaluations)
       until a classification is run on the branch, and suggested changes are saved.
    */
    {
      "active": true,
      // "moduleId", "namespaceId" will be set from the concept
      // "id" will be generated for the relationship
      // "sourceId" will be automatically populated with the new concept's SCTID
      "typeId": "116680003",       // IS A
      "destinationId": "64572001", // Disease
      "destinationNegated": false,
      "relationshipGroup": 0,
      "unionGroup": 0,
      "characteristicTypeId": "900000000000011006", // Inferred relationship
      "modifierId": "900000000000451002" // Some (existential restriction)
    }
  ],
  "members": [
    {
      // "id" is an UUID, will be automatically generated when not given
      "active": true,
      // "moduleId" needs to be set for reference set members; it is not propagated
      "moduleId": "636635721000154103",
      // "referencedComponentId" will be automatically populated with the new concept's SCTID
      "refsetId": "733073007",
      /*
         Additional properties of the reference set should be added here. For an OWL
         axiom reference set member, the property to the reference set type is called
         "owlExpression".

         At the moment we can not create an OWL axiom member for a concept whose
         SCTID is not known in advance.
      */
      "owlExpression": "SubClassOf(:<conceptId> :64572001)"
    }
  ],
  "commitComment": "Create new example concept"
}

// Response: 201 Created
// Location: /snomedct/SNOMEDCT-B2I/concepts/<SCTID of created concept>
PUT /snomed-ct/v3/SNOMEDCT/concepts/69949008
// Request body
{
  "active": false,
  "inactivationProperties": {
    "inactivationIndicatorId": "900000000000482003", // Duplicate
    "associationTargets": [
      { 
        "referenceSetId": "900000000000527005", // SAME AS association reference set
        "targetComponentId": "273999003"        // Neuroplasty
      }
    ]
  },
  "commitComment": "Inactivate duplicate concept"
}

// Response: 204 No Content
PUT /snomed-ct/v3/SNOMEDCT/concepts/69949008
// Request body
{
  "active": true,
  "commitComment": "Reactivate concept"
}

// Response: 204 No Content
GET /snomed-ct/v3/SNOMEDCT/concepts/69949008
{
  "id": "69949008",
  "active": true,
  [...]
  "inactivationProperties": {
    "associationTargets": []
  },
  [...]
}
DELETE /snomedct/SNOMEDCT/2021-01-31/concepts/138875005
{
  "status": 409,
  "code": 0,
  "message": "'concept' '138875005' cannot be deleted.",
  "developerMessage": "'concept' '138875005' cannot be deleted.",
  "errorCode": 0,
  "statusCode": 409
}
POST /members/:id/actions
{
  "commitComment": "Sync member's target reference set",
  "action": "sync"
}
PUT /:path/refsets/:id/members
{
  "commitComment": "Updating members of my simple type reference set",
  "requests": [
    {
      "action": "create|update|delete|sync",
      "action-specific-props": ...
    }
  ]
}
POST /compare 
{
  "baseBranch": "MAIN",
  "compareBranch": "MAIN/a",
  "limit": 100
}
Status: 200 OK
{
  "baseBranch": "MAIN",
  "compareBranch": "MAIN/a",
  "compareHeadTimestamp": 1567282434400,
  "newComponents": [],
  "changedComponents": ["138875005"],
  "deletedComponents": [],
  "totalNew": 0,
  "totalChanged": 1,
  "totalDeleted": 0
}
GET /snomedct/MAIN@1567282434400/concepts/138875005
Status: 200 OK
{
  "id": "138875005",
  ...
}
GET /snomedct/MAIN/concepts/138875005
Status: 200 OK
{
  "id": "138875005",
  ...
}
GET /snomedct/MAIN/a^/concepts/138875005
Status: 200 OK
{
  "id": "138875005",
  ...
}
GET /branches/:path
Status: 200 OK
{
  "name": "MAIN",
  "baseTimestamp": 1431957421204,
  "headTimestamp": 1431957421204,
  "deleted": false,
  "path": "MAIN",
  "state": "UP_TO_DATE"
}
GET /branches
Status: 200 OK
{
  "items": [
    {
      "name": "MAIN",
      "baseTimestamp": 1431957421204,
      "headTimestamp": 1431957421204,
      "deleted": false,
      "path": "MAIN",
      "state": "UP_TO_DATE"
    }
  ]
}
POST /branches
{
  "parent" : "MAIN",
  "name" : "branchName",
  "metadata": {}
}
Status: 201 Created
Location: http://localhost:8080/snowowl/snomedct/branches/MAIN/branchName
DELETE /branches/:path
Status: 204 No Content
POST /merges
{
  "source" : "MAIN/branchName",
  "target" : "MAIN"
}
Status: 202 Accepted
Location: http://localhost:8080/snowowl/snomedct/merges/2f4d3b5b-3020-4e8e-b046-b8266967d7dc
POST /merges
{
  "source" : "MAIN",
  "target" : "MAIN/branchName"
}
Status: 202 Accepted
Location: http://localhost:8080/snowowl/snomedct/merges/c82c443d-f3f4-4409-9cdb-a744da336936
GET /merges/c82c443d-f3f4-4409-9cdb-a744da336936
{
  "id": "c82c443d-f3f4-4409-9cdb-a744da336936",
  "source": "MAIN",
  "target": "MAIN/branchName",
  "status": "COMPLETED",
  "scheduledDate": "2016-02-29T13:52:45Z",
  "startDate": "2016-02-29T13:52:45Z",
  "endDate": "2016-02-29T13:53:06Z"
}
DELETE /merges/c82c443d-f3f4-4409-9cdb-a744da336936
Status: 204 No Content