Snow Owl requires Java 11 or newer. Specifically, as of this writing, it is recommended that you use JDK (Oracle or OpenJDK) version 11.0.2. Java installation varies from platform to platform, so we won’t go into those details here; Oracle’s recommended installation documentation can be found on Oracle’s website. Suffice it to say, before you install Snow Owl, please check your Java version first by running the command below (and then install/upgrade accordingly if needed):
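For example:

    java -version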
Once we have Java set up, we can then download and run Snow Owl. The binaries are available on the Releases page. For each release, you have a choice among a zip or tar archive and a DEB or RPM package.
For simplicity, let's use a zip file.
Let's download the most recent Snow Owl release and extract it; extraction creates a number of files and folders in your current directory. We then go into the bin directory and are ready to start the instance, as sketched below.
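The archive name and download URL below are placeholders; pick the actual release artifact from the Releases page:

    # Download a release archive (placeholder URL and version)
    curl -LO https://github.com/b2ihealthcare/snow-owl/releases/download/<version>/<snow-owl-archive>.zip
    # Extract it; this creates the distribution folder in the current directory
    unzip <snow-owl-archive>.zip
    # Go into the bin directory
    cd <extracted-folder>/bin
    # Start the instance in the foreground
    ./snowowl.sh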
If everything goes well with the installation, you should see a bunch of log messages that look like below:
Let’s start with a basic health check, which we can use to see how our instance is doing. We’ll be using curl to do this, but you can use any tool that allows you to make HTTP/REST calls. Let’s assume that we are still on the same node where we started Snow Owl and open another command shell window.
We will be using Snow Owl's Core API to check its status. You can run the following command by clicking the "Copy" link on the right side and pasting it into a terminal.
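The status endpoint referenced later in this guide is /snowowl/info, so the health check looks like this:

    curl http://localhost:8080/snowowl/info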
And the response:
We can see the installed version along with available repositories, their overall health (eg. "snomed" with health "GREEN"), associated indices and status (eg. "snomed-relationship" with status "GREEN").
Repository indices store content for any number of code systems that share the same data structure and API; in the case of "snomed", the International Edition of SNOMED CT and its extensions.
Whenever we ask for repository status, we either get GREEN, YELLOW, or RED and an optional diagnosis message.
GREEN - everything is good (repository is fully functional)
YELLOW - some data or functionality is not available, or a diagnostic operation is in progress (repository is partially functional)
RED - a diagnostic operation is required in order to continue (repository is not functional)
Now that we have a code system, let's take a look at its content! We can list concepts using either the SNOMED CT API tailored to this tooling, or the FHIR API for a representation that is uniform across different kinds of code systems. For the sake of simplicity, we will use the former in this example.
To list all available concepts in a code system, use the following command (just as with importing, the second SNOMEDCT in the request path represents the code system identifier):
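A minimal sketch of such a request; the exact path prefix is an assumption, so consult the SNOMED CT API docs for your release:

    curl http://localhost:8080/snowowl/snomedct/SNOMEDCT/concepts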
The expected response is:
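Assuming the items/limit/total shape used by Snow Owl's list responses (the exact fields may differ between versions), an empty result looks roughly like this:

    {
      "items": [],
      "limit": 50,
      "total": 0
    }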
The concept list is empty, indicating that we haven't imported anything into Snow Owl - yet.
Snow Owl® is a highly scalable, open source terminology server and collaborative authoring platform. It allows you to store, search and author high volumes of terminology artifacts quickly and efficiently.
Here are a few use cases for Snow Owl:
You work in the healthcare industry and are interested in using a terminology server for browsing, accessing and distributing components of various terminologies and classifications to third-party consumers. In this case, you can use Snow Owl to load the necessary terminologies and access them via FHIR and proprietary APIs.
You are responsible for maintaining and publishing new versions of a particular terminology. In this case, you can use Snow Owl to collaboratively access and author the terminology content and at the end of your release schedule publish it with confidence and zero errors.
You have an Electronic Health Record system and would like to capture, maintain and query clinical information in a structured and standardized manner. Your Snow Owl terminology server can integrate with your EHR server via standard APIs to provide the necessary access for both terminology binding and data processing and analytics.
In this tutorial, you will be guided through the process of getting Snow Owl up and running, taking a peek inside it, and performing basic operations like importing SNOMED CT RF2 content, searching, and modifying your data. At the end of this tutorial, you should have a good idea of what Snow Owl is, how it works, and hopefully be inspired to see how you can use it for your needs.
There are a few concepts that are core to Snow Owl. Understanding these concepts from the outset will tremendously help ease the learning process.
A terminology (also known as code system, classification and/or ontology) defines and encapsulates a set of terminology components (eg. a set of codes with their meanings) and versions. A terminology is identified by a unique name and stored in a repository. Multiple code systems can exist alongside each other in a single repository as long as their names are unique.
A terminology component is a basic element in a code system with actual clinical meaning or use. For example in SNOMED CT, the Concept, Description, Relationship and Reference Set Member are terminology components.
A version refers to an important snapshot in time, consistent across many terminology components; it is also known as a tag or label. It is often created when the state of the terminology is deemed ready to be published and distributed to downstream customers or for internal use. A version is identified by its version ID (or version tag) within a given code system.
A repository manages changes to a set of data over time in the form of revisions. Conceptually it is very similar to a source code repository (like a Git repository), but information stored in the repository must conform to a predefined schema (eg. the SNOMED CT Concepts RF2 schema) as opposed to being stored in pure binary or textual format. This way a repository can support various full-text search functionalities, semantic queries and evaluations on the stored, revision-controlled terminology data.
A repository is identified by a name and this name is used to refer to the repository when performing create, read, update, delete and other operations against the revisions in it. Repositories organize revisions into branches and commits.
A revision is the basic unit of information stored in a repository about a terminology component or artifact. It contains two types of information:
one is the actual data that you care about, for example a single code from a code system with its meaning and properties.
the other is revision control information (aka revision metadata). Each revision is identified by a random Universally Unique IDentifier (UUID) that is assigned when performing a commit in the repository. Also, during a commit each revision is associated with a branch and timestamp. Revisions can be compared, restored, and merged.
A set of components under version control may be branched or forked at a point in time so that, from that time forward, two copies of those components may develop at different speeds or in different ways independently of each other. At a later point in time, the changes made on one of these branches can be merged into the other.
Branches are organized into hierarchies like directories in file systems. A child branch has access to all of the information that is stored on its parent branch up until its baseTimestamp, which is the time the branch was created. Each repository has a predefined root branch, called MAIN.
A commit represents a set of changes made against a branch in a repository. After a successful commit, the changes made by the commit are immediately available and searchable on the given branch.
A merge/rebase is an operation in which two sets of changes are applied to a set of components. A merge/rebase always happens between two branches, denoting one as the source and the other as the target of the operation.
Now let's take a peek at our code systems:
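Assuming the code systems endpoint lives under the same API root (the path may vary by version):

    curl http://localhost:8080/snowowl/codesystems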
The response:
...it sure looks empty! This is expected, as Snow Owl does not contain any predefined code system metadata out of the box. We can create the first code system with the following request:
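A sketch of the creation request; the endpoint path and field names are assumptions based on the description below, and most metadata and settings are trimmed:

    curl -X POST http://localhost:8080/snowowl/codesystems \
      -H "Content-Type: application/json" \
      -d '{
            "id": "SNOMEDCT",
            "title": "SNOMED CT International Edition",
            "toolingId": "snomed",
            "settings": {}
          }'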
Use of SNOMED CT is subject to additional conditions not listed here, and the full copyright notice has been shortened for brevity in the request above. Please see https://www.snomed.org/snomed-ct/get-snomed for details.
The request body includes:
The code system identifier "SNOMEDCT"
Various pieces of metadata offering a human-readable title, ownership and contact information, code system status, URL and OID for identification, etc.
The tooling identifier "snomed" that points to the repository that will store content
Additional code system settings stored as key-value pairs
If everything goes well, the command will run without any errors (the server returns a "204 No Content" response). We can double-check that code system metadata has been registered correctly with the following request:
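For example (same assumed path prefix as before):

    curl http://localhost:8080/snowowl/codesystems/SNOMEDCT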
The expected response is:
In addition to the submitted values, you will find that additional administrative properties also appear in the output. One example is branchPath, which specifies the working branch of the code system within the repository.
Now that we have our instance up and running, the next step is to understand how to communicate with it. Fortunately, Snow Owl provides very comprehensive and powerful APIs to interact with your instance.
Here are a few of the things that can be done with the API:
Check your instance health, status, and statistics
Administer your instance data
Perform CRUD (Create, Read, Update, and Delete) and search operations against your terminologies
Execute advanced search operations such as paging, sorting, filtering, scripting, aggregations, and many others
Snow Owl is both a simple and complex product. We’ve so far learned the basics of what it is, how to look inside of it, and how to work with it using some of the available APIs. Hopefully this tutorial has given you a better understanding of what Snow Owl is and more importantly, inspired you to further experiment with the rest of its great features!
This section includes information on how to set up Snow Owl and get it running, including:
Downloading
Installing
Starting
Configuring
Snow Owl is built using Java, and requires at least Java 11 in order to run. Only Oracle’s Java and the OpenJDK are supported. The same JVM version should be used on all Snow Owl nodes and clients.
We recommend installing version 11.0.x or a later release in the Java 11 series, preferably a supported LTS version.
The version of Java that Snow Owl will use can be configured by setting the JAVA_HOME environment variable.
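For example, on a typical Linux installation (the JDK path is an assumption):

    export JAVA_HOME=/usr/lib/jvm/java-11-openjdk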
Snow Owl® is a highly scalable, open source terminology server with revision-control capabilities and collaborative authoring platform features. It allows you to store, search and author high volumes of terminology artifacts quickly and efficiently. If you’d like to see Snow Owl in action, the Snowray Terminology Service™ provides a managed terminology server and high-quality terminology content management from your web browser.
Features include:
Revision-controlled authoring
Maintains multiple versions (including unpublished and published) for each terminology artifact and provides APIs to access them all
Independent work branches offer work-in-process isolation, external business workflow integration and team collaboration
SNOMED CT and others
SNOMED CT terminology support
RF2 Release File Specification as of 2023-09-01
Support for Relationships with concrete values
Official and Custom Reference Sets
Expression Constraint Language v2.1.0 spec, implementation
Compositional Grammar 2.3.1 spec, implementation
Expression Template Language 1.0.0 spec, implementation
With its modular design, the server can maintain multiple terminologies (including local codes, mapping sets, value sets)
A variety of APIs
SNOMED CT API (RESTful and native Java API)
FHIR API R4 v4.0.1 spec
CIS API 1.0 (see reference implementation)
Highly extensible and configurable
Simple to use plug-in system makes it easy to develop and add new terminology tooling/API or any other functionality
Built on top of Elasticsearch (highly scalable, distributed, open source search engine)
Connect to your existing cluster or use the embedded instance (supports up to Elasticsearch 8.x)
All the power of Elasticsearch is available (full-text search support, monitoring, analytics and many more)
This distribution only includes features licensed under the Apache 2.0 license. To get access to the full set of features, please contact B2i Healthcare.
View the detailed release notes here.
Not the version you're looking for? View past releases.
NOTE: You need to have version 17 of the JDK installed for local builds and running the development environment. Official releases include the runtime.
Once you have downloaded the appropriate package:
Run bin/snowowl.sh on Unix, or bin/snowowl.bat on Windows
Run curl http://localhost:8080/snowowl/info to access server health status information
Run curl http://localhost:8080/snowowl/fhir/metadata to access FHIR terminology capabilities
Navigate to http://localhost:8080/snowowl to access the REST API documentation page
See more documentation at SNOMED CT API docs and at FHIR API docs
Snow Owl uses Maven for its build system. In order to create a distribution, simply run the following command in the cloned directory.
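A typical invocation, assuming a standard Maven setup (the repository may ship a Maven wrapper, in which case use ./mvnw instead):

    mvn clean package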
The distribution packages can be found in the releng/com.b2international.snowowl.server.update/target folder when the build is complete.
To run the test cases, use the following command:
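Again assuming standard Maven conventions:

    mvn clean verify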
These instructions will get Snow Owl up and running on your local machine for development and testing purposes.
Snow Owl is an Equinox-OSGi based server. To develop plug-ins for Snow Owl you need to use Eclipse as IDE:
Download Eclipse IDE for Eclipse Committers 2023-12 package from here: https://www.eclipse.org/downloads/packages/release/2023-12/r/eclipse-ide-eclipse-committers
Required Eclipse plug-ins, in order (install the listed features via Help → Install New Software...):
Note: you may have to untick the Show only the latest versions of the available software checkbox to get older versions of a feature. Please use the exact version specified below, not the latest point release.
Groovy Development Tools (https://groovy.jfrog.io/ui/native/plugins-release/e4.30 or https://groovy.jfrog.io/artifactory/plugins-release-local/org/codehaus/groovy/groovy-eclipse-integration/5.2.0/e4.30)
Eclipse Groovy Development Tools - 5.2.0 (in category "Main Package")
MWE2 (https://download.eclipse.org/modeling/emft/mwe/updates/releases/2.16.0/)
MWE SDK 1.10.0 (MWE)
Xtext/Xtend (https://download.eclipse.org/modeling/tmf/xtext/updates/releases/2.33.0/)
Xtend IDE 2.33.0 (Xtext)
Xtext Complete SDK 2.33.0 (Xtext)
Make sure you have the following preferences enabled/disabled.
Plug-in development API baseline errors is set to Ignored (Preferences > Plug-in Development > API Baselines)
The Plugin execution not covered by lifecycle configuration: org.apache.maven.plugins:maven-clean-plugin:2.5:clean type of errors can be ignored or changed to Warnings in Preferences→Maven→Errors/Warnings.
Set the workspace encoding to UTF-8 (Preferences→General→Workspace)
Set the line endings to Unix style (Preferences→General→Workspace)
Set the number of imports and static imports wildcard limit to 5 (Preferences→Java→Code Style→Organize Imports)
Make sure the Git line endings are set to input (Preferences→Team→Git→Configuration - add key if missing core.autocrlf = input)
Make sure the settings.xml in your ~/.m2/settings.xml location is updated with the content from the settings.xml in this repository's root folder.
Import all projects into your Eclipse workspace and wait for the build to complete
Select all projects and hit Alt + F5 to trigger an update of all Maven projects manually (to download dependencies from Maven)
Open the target-platform/target-platform.target file
Wait until Eclipse resolves the target platform (click on the Resolve button if it refuses to do so) and then click on Set as Active Target platform
Wait until the build is complete and you have no compile errors
Launch the snow-owl-oss launch configuration in the Run Configurations menu
Navigate to http://localhost:8080/snowowl
Please see CONTRIBUTING.md for details.
Our releases use semantic versioning. You can find a chronologically ordered list of notable changes in CHANGELOG.md.
This project is licensed under the Apache 2.0 License. See LICENSE for details and refer to NOTICE for additional licensing notes and uses of third-party components.
In March 2015, SNOMED International generously licensed the Snow Owl Terminology Server components supporting SNOMED CT. They subsequently made the licensed code available to their members and the global community under an open-source license.
In March 2017, NHS Digital licensed the Snow Owl Terminology Server to support the mandatory adoption of SNOMED CT throughout all care settings in the United Kingdom by April 2020. In addition to driving the UK’s clinical terminology efforts by providing a platform to author national clinical codes, Snow Owl will support the maintenance and improvement of the dm+d drug extension which alone is used in over 156 million electronic prescriptions per month. Improvements to the terminology server made under this agreement will be made available to the global community.
Many other organizations have directly and indirectly contributed to Snow Owl, including: Singapore Ministry of Health; American Dental Association; University of Nebraska Medical Center (USA); Federal Public Service of Public Health (Belgium); Danish Health Data Authority; Health and Welfare Information Systems Centre (Estonia); Department of Health (Ireland); New Zealand Ministry of Health; Norwegian Directorate of eHealth; Integrated Health Information Systems (Singapore); National Board of Health and Welfare (Sweden); eHealth Suisse (Switzerland); and the National Library of Medicine (USA).
The RPM for Snow Owl can be downloaded from the Downloads section. It can be used to install Snow Owl on any RPM-based system such as OpenSuSE, SLES, CentOS, Red Hat, and Oracle Enterprise.
RPM install is not supported on distributions with old versions of RPM, such as SLES 11 and CentOS 5. Please see the .zip and .tar.gz installation instructions instead.
On systemd-based distributions, the installation scripts will attempt to set kernel parameters (e.g., vm.max_map_count); you can skip this by masking the systemd-sysctl.service unit.
Use the chkconfig command to configure Snow Owl to start automatically when the system boots up:
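For example (the service name snowowl is assumed):

    sudo chkconfig --add snowowl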
Snow Owl can be started and stopped using the service command:
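For example:

    sudo service snowowl start
    sudo service snowowl stop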
If Snow Owl fails to start for any reason, it will print the reason for failure to STDOUT. Log files can be found in /var/log/snowowl/.
To configure Snow Owl to start automatically when the system boots up, run the following commands:
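Using the snowowl.service unit referenced later in these docs:

    sudo /bin/systemctl daemon-reload
    sudo /bin/systemctl enable snowowl.service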
Snow Owl can be started and stopped as follows:
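For example:

    sudo systemctl start snowowl.service
    sudo systemctl stop snowowl.service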
These commands provide no feedback as to whether Snow Owl was started successfully or not. Instead, this information will be written in the log files located in /var/log/snowowl/.
You can test that your Snow Owl instance is running by sending an HTTP request to:
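For example, using the status endpoint:

    curl http://localhost:8080/snowowl/info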
which should give you a response something like this:
Snow Owl defaults to using /etc/snowowl for runtime configuration. The ownership of this directory and all files in this directory are set to root:snowowl on package installation, and the directory has the setgid flag set so that any files and subdirectories created under /etc/snowowl are created with this ownership as well (e.g., if a keystore is created using the keystore tool). It is expected that this be maintained so that the Snow Owl process can read the files under this directory via the group permissions.
The RPM places config files, logs, and the data directory in the appropriate locations for an RPM-based system:
You now have a test Snow Owl environment set up. Before you start serious development or go into production with Snow Owl, you must do some additional setup:
Let's import an RF2 release in SNAPSHOT mode so that we can further explore the available SNOMED CT APIs! To do so, use the appropriate import request as follows (the second SNOMEDCT in the request path represents the code system identifier):
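A sketch of the upload; the endpoint path, parameter names and archive filename are assumptions, so check the SNOMED CT API docs for your version:

    curl -v -X POST http://localhost:8080/snowowl/snomedct/SNOMEDCT/import \
      -F "type=SNAPSHOT" \
      -F "file=@SnomedCT_InternationalRF2_<release>.zip"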
Curl will display the entire interaction between it and the server, including many request and response headers. We are interested in these two (response) rows in particular:
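With -v enabled, they look something like this (the resource identifier is a placeholder):

    HTTP/1.1 201 Created
    Location: http://localhost:8080/snowowl/snomedct/SNOMEDCT/import/<import-id>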
The first one indicates that the file was uploaded successfully and a resource has been created to track import progress, while the second row indicates the location of this resource.
Depending on the size and type of the RF2 package, hardware and Snow Owl configuration, RF2 imports might take hours to complete. Official SNAPSHOT distributions can be imported in less than 30 minutes by allocating 6 GB of heap size to Snow Owl and configuring it to use a solid state disk for the data directory.
The process itself is asynchronous and its status can be checked by periodically sending a GET request to the location indicated by the response header:
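Using the Location value from above (placeholder id):

    curl http://localhost:8080/snowowl/snomedct/SNOMEDCT/import/<import-id>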
The expected response while the import is running:
Upon completion, you should receive a different response which lists component identifiers visited during the import as well as any defects encountered in uploaded release files:
Snow Owl is provided in the following package formats:
Snow Owl is provided as a .zip and as a .tar.gz package. These packages can be used to install Snow Owl on any system and are the easiest package format to use when trying out Snow Owl.
The latest stable version of Snow Owl can be found on the Downloads page.
Snow Owl requires Java 11 or newer. Use the official Oracle distribution or an open-source distribution such as OpenJDK.
Download and install the .zip package
The .zip archive for Snow Owl can be downloaded and installed as follows:
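A sketch with placeholder version and URL:

    wget https://github.com/b2ihealthcare/snow-owl/releases/download/<version>/<snow-owl-archive>.zip
    unzip <snow-owl-archive>.zip
    cd <extracted-folder>/   # this directory is known as $SO_HOME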
Download and install the .tar.gz package
The .tar.gz archive for Snow Owl can be downloaded and installed as follows:
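And the .tar.gz equivalent:

    wget https://github.com/b2ihealthcare/snow-owl/releases/download/<version>/<snow-owl-archive>.tar.gz
    tar -xzf <snow-owl-archive>.tar.gz
    cd <extracted-folder>/   # this directory is known as $SO_HOME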
Snow Owl can be started from the command line as follows:
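From $SO_HOME:

    ./bin/snowowl.sh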
By default, Snow Owl runs in the foreground, prints its logs to the standard output (stdout), and can be stopped by pressing Ctrl-C.
All scripts packaged with Snow Owl assume that Bash is available at /bin/bash. As such, Bash should be available at this path either directly or via a symbolic link.
You can test that your instance is running by sending an HTTP request to Snow Owl's status endpoint:
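For example:

    curl http://localhost:8080/snowowl/info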
which should give you a response like this:
You can send the Snow Owl process to the background using the combination of nohup and the & character:
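A sketch; the output file name is arbitrary:

    nohup ./bin/snowowl.sh > snowowl.out 2>&1 &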
Log messages can be found in the $SO_HOME/serviceability/logs/ directory.
To shut down Snow Owl, you can kill the process ID directly, or use the provided shutdown script, as sketched below:
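Both the PID lookup and the script name are assumptions here:

    # Stop by PID (looked up via pgrep in this sketch)
    kill $(pgrep -f snowowl)
    # ...or invoke the provided shutdown script from $SO_HOME
    ./bin/shutdown.sh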
Directory layout of the .zip and .tar.gz archives: The .zip and .tar.gz packages are entirely self-contained. All files and directories are, by default, contained within $SO_HOME, the directory created when unpacking the archive.
This is very convenient because you don’t have to create any directories to start using Snow Owl, and uninstalling Snow Owl is as easy as removing the $SO_HOME directory. However, it is advisable to change the default locations of the config directory, the data directory, and the logs directory so that you do not delete important data later on.
You now have a test Snow Owl environment set up. Before you start serious development or go into production with Snow Owl, you must do some additional setup:
Snow Owl loads its configuration from the /etc/snowowl/snowowl.yml file by default. The format of this config file is explained in Configuring Snow Owl.
Learn how to configure Snow Owl.
Configure important Snow Owl settings.
Configure important system settings.
home: Snow Owl home directory or $SO_HOME (location: /usr/share/snowowl)
bin: Binary scripts including startup/shutdown to start/stop the instance (location: /usr/share/snowowl/bin)
conf: Configuration files including snowowl.yml (location: /etc/snowowl)
data: The location of the data files and resources (location: /var/lib/snowowl; setting: path.data)
logs: Log files location (location: /var/log/snowowl)
zip/tar.gz: The zip and tar.gz packages are suitable for installation on any system and are the easiest choice for getting started with Snow Owl on most systems. See Install Snow Owl with tar.gz or zip.
rpm: The rpm package is suitable for installation on Red Hat, CentOS, SLES, OpenSuSE and other RPM-based systems. RPMs may be downloaded from the Downloads section. See Install Snow Owl with RPM.
deb: The deb package is suitable for Debian, Ubuntu, and other Debian-based systems. Debian packages may be downloaded from the Downloads section. See Install Snow Owl with Debian Package.
docker: Images are available for running Snow Owl as Docker containers. They may be downloaded from the official Docker Hub Registry. See Install Snow Owl with Docker.
home: Snow Owl home directory or $SO_HOME (location: directory created by unpacking the archive)
bin: Binary scripts including startup/shutdown to start/stop the instance (location: $SO_HOME/bin)
conf: Configuration files including snowowl.yml (location: $SO_HOME/configuration)
data: The location of the data files and resources (location: $SO_HOME/resources; setting: path.data)
logs: Log files location (location: $SO_HOME/serviceability/logs)
Snow Owl ships with good defaults and requires very little configuration.
Snow Owl has three configuration files:
snowowl.yml for configuring Snow Owl
serviceability.xml for configuring Snow Owl logging
elasticsearch.yml for configuring the underlying Elasticsearch instance in case of embedded deployments
These files are located in the config directory, whose default location depends on whether the installation is from an archive distribution (tar.gz or zip) or a package distribution (Debian or RPM packages).
For the archive distributions, the config directory location defaults to $SO_PATH_HOME/configuration. The location of the config directory can be changed via the SO_PATH_CONF environment variable as follows:
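For example (the config path is a placeholder):

    SO_PATH_CONF=/path/to/my/config ./bin/snowowl.sh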
Alternatively, you can export the SO_PATH_CONF environment variable via the command line or via your shell profile.
For the package distributions, the config directory location defaults to /etc/snowowl. The location of the config directory can also be changed via the SO_PATH_CONF environment variable, but note that setting this in your shell is not sufficient. Instead, this variable is sourced from /etc/default/snowowl (for the Debian package) and /etc/sysconfig/snowowl (for the RPM package). You will need to edit the SO_PATH_CONF=/etc/snowowl entry in one of these files accordingly to change the config directory location.
The configuration format is YAML. Here is an example of changing the path of the data directory:
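For instance, using the path.data setting listed in the directory layout tables (the value is an example):

    path:
      data: /var/lib/snowowl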
Settings can also be flattened as follows:
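The flattened equivalent of the example above:

    path.data: /var/lib/snowowl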
Environment variables referenced with the ${...} notation within the configuration file will be replaced with the value of the environment variable, for instance:
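e.g., assuming a DATA_DIR environment variable has been exported:

    path:
      data: ${DATA_DIR}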
Snow Owl uses SLF4J and Logback for logging.
The logging configuration file (serviceability.xml) can be used to configure Snow Owl logging. The logging configuration file location depends on your installation method; by default it is located in the ${SO_HOME}/configuration folder.
Extensive information on how to customize logging and all the supported appenders can be found in the Logback documentation.
By default, Snow Owl starts and connects to an embedded Elasticsearch cluster available on http://localhost:9200. This cluster has only a single node and its discovery method is set to single-node, which means it is not able to connect to other Elasticsearch clusters and will be used exclusively by Snow Owl.
This single node Elasticsearch cluster can easily serve Snow Owl in testing, evaluation and small authoring environments, but it is recommended to customize how Snow Owl connects to an Elasticsearch cluster in larger environments (especially when planning to scale with user demand).
You have two options to configure Elasticsearch used by Snow Owl.
The first option is to configure the underlying Elasticsearch instance by editing the configuration file elasticsearch.yml, which, depending on your installation, is available in the configuration directory (you can create the file if it is not available; Snow Owl will pick it up during the next startup).
The embedded Elasticsearch version is 6.3.2. If you are configuring Snow Owl to connect to an existing Elasticsearch cluster, make sure that the cluster version matches this version.
The second option is to configure Snow Owl to use a remote Elasticsearch cluster without the embedded instance. In order to use this feature you need to set the repository.index.clusterUrl configuration parameter to the remote address of your Elasticsearch cluster. When Snow Owl is configured to connect to a remote Elasticsearch cluster, it won't boot up the embedded instance, which reduces the memory requirements of Snow Owl slightly.
You can connect to self-hosted clusters or hosted solutions provided by AWS and Elastic.co for example.
You should rarely need to change Java Virtual Machine (JVM) options. If you do, the most likely change is setting the heap size.
The preferred method of setting JVM options (including system properties and JVM flags) is via the SO_JAVA_OPTS environment variable. For instance:
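For example (heap values are illustrative):

    export SO_JAVA_OPTS="-Xms8g -Xmx8g"
    ./bin/snowowl.sh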
When using the RPM or Debian packages, SO_JAVA_OPTS can be specified in the system configuration file.
Some other Java programs support the JAVA_OPTS environment variable. This is not a mechanism built into the JVM but instead a convention in the ecosystem. However, we do not support this environment variable, instead supporting setting JVM options via the environment variable SO_JAVA_OPTS as above.
Snow Owl is also available as Docker images. The images use centos:7 as the base image.
A list of all published Docker images and tags is available at Docker Hub.
These images are free to use under the Apache 2.0 license. They contain open source features only.
Obtaining Snow Owl for Docker is as simple as issuing a docker pull command against the Docker Hub registry.
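A sketch; the organization name in the image coordinates is an assumption, while the snow-owl-oss image name and tag follow the versioning example later on this page:

    docker pull b2ihealthcare/snow-owl-oss:7.2.0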
Snow Owl can be quickly started for development or testing use with the following command:
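The published ports match the TCP ports the image exposes (the image coordinates are assumed, as above):

    docker run -p 8080:8080 -p 2036:2036 b2ihealthcare/snow-owl-oss:7.2.0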
The vm.max_map_count kernel setting needs to be set to at least 262144 permanently in /etc/sysctl.conf for production use. To apply the setting on a live system, type: sysctl -w vm.max_map_count=262144
The following example brings up a Snow Owl instance with its dedicated Elasticsearch node. To bring up the cluster, use the docker-compose.yml and just type:
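From the directory containing the compose file:

    docker-compose up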
docker-compose is not pre-installed with Docker on Linux. Instructions for installing it can be found on the Docker Compose webpage.
The node snowowl listens on localhost:8080 while it talks to the elasticsearch node over a Docker network.
To stop the cluster, type docker-compose down. Data volumes/mounts will persist, so it's possible to start the stack again with the same data using docker-compose up.
Snow Owl loads its configuration from files under /usr/share/snowowl/config/. These configuration files are documented in the Configure Snow Owl pages.
The image offers several methods for configuring Snow Owl settings, with the conventional approach being to provide customized files, that is to say, snowowl.yml. It's also possible to use environment variables to set options:
A. Bind-mounted configuration: Create your custom config file and mount this over the image's corresponding file. For example, bind-mounting a custom_snowowl.yml with docker run can be accomplished with the parameter:
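The parameter would look like this (the host path is a placeholder; the container path follows the config location mentioned above):

    -v full_path_to/custom_snowowl.yml:/usr/share/snowowl/config/snowowl.yml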
The container runs Snow Owl as user snowowl using uid:gid 1000:1000. Bind-mounted host directories and files, such as custom_snowowl.yml above, need to be accessible by this user. For the mounted data and log dirs, such as /usr/share/snowowl/resources, write access is required as well.
B. Customized image: In some environments, it may make more sense to prepare a custom image containing your configuration. A Dockerfile to achieve this may be as simple as:
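A minimal sketch, assuming the image coordinates used earlier:

    FROM b2ihealthcare/snow-owl-oss:7.2.0
    COPY --chown=snowowl:snowowl custom_snowowl.yml /usr/share/snowowl/config/snowowl.yml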
You could then build and try the image with something like:
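For example:

    docker build --tag=snow-owl-custom .
    docker run -ti -p 8080:8080 -p 2036:2036 snow-owl-custom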
We have collected a number of best practices for production use. Any Docker parameters mentioned below assume the use of docker run.
By default, Snow Owl runs inside the container as user snowowl using uid:gid 1000:1000.
If you are bind-mounting a local directory or file, ensure it is readable by this user, while the data and log dirs additionally require write access. A good strategy is to grant group access to gid 1000 or 0 for the local directory. As an example, to prepare a local directory for storing data through a bind-mount:
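A sketch for a data directory bind-mount:

    mkdir snowowl-data
    chmod g+rwx snowowl-data
    chgrp 1000 snowowl-data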
It is important to ensure increased ulimits for nofile and nproc are available for the Snow Owl containers. Verify that the init system for the Docker daemon is already setting those to acceptable values and, if needed, adjust them in the Daemon, or override them per container, for example using docker run:
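For example (the values mirror the file descriptor and thread guidance elsewhere in these docs):

    --ulimit nofile=65536:65536 --ulimit nproc=4096:4096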
NOTE: One way of checking the Docker daemon defaults for the aforementioned ulimits is by running:
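One way, using the same base image as the Snow Owl image:

    docker run --rm centos:7 /bin/bash -c \
      'ulimit -Hn && ulimit -Sn && ulimit -Hu && ulimit -Su'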
Swapping needs to be disabled for performance and stability. This can be achieved through any of the methods mentioned in the system settings.
The image exposes TCP ports 8080 and 2036.
Use the SO_JAVA_OPTS environment variable to set heap size. For example, to use 16GB, use SO_JAVA_OPTS="-Xms16g -Xmx16g" with docker run.
Pin your deployments to a specific version of the Snow Owl OSS Docker image. For example, snow-owl-oss:7.2.0.
Consider centralizing your logs by using a different logging driver (see https://docs.docker.com/engine/admin/logging/overview/). Also note that the default json-file logging driver is not ideally suited for production use.
This is only relevant if you are running Snow Owl with an embedded Elasticsearch and not connecting it to an existing cluster.
Snow Owl (with embedded Elasticsearch) uses a lot of file descriptors or file handles. Running out of file descriptors can be disastrous and will most probably lead to data loss. Make sure to increase the limit on the number of open file descriptors for the user running Snow Owl to 65,536 or higher.
For the .zip and .tar.gz packages, set ulimit -n 65536 as root before starting Snow Owl, or set nofile to 65536 in /etc/security/limits.conf.
RPM and Debian packages already default the maximum number of file descriptors to 65536 and do not require further configuration.
The Debian package for Snow Owl can be downloaded from the Downloads section. It can be used to install Snow Owl on any Debian-based system such as Debian and Ubuntu.
Use the update-rc.d command to configure Snow Owl to start automatically when the system boots up:
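For example (the runlevel arguments are typical defaults, not verified for this package):

    sudo update-rc.d snowowl defaults 95 10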
Snow Owl can be started and stopped using the service command:
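For example:

    sudo service snowowl start
    sudo service snowowl stop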
If Snow Owl fails to start for any reason, it will print the reason for failure to STDOUT. Log files can be found in /var/log/snowowl/.
To configure Snow Owl to start automatically when the system boots up, run the following commands:
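Using the snowowl.service unit mentioned below:

    sudo /bin/systemctl daemon-reload
    sudo /bin/systemctl enable snowowl.service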
Snow Owl can be started and stopped as follows:
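For example:

    sudo systemctl start snowowl.service
    sudo systemctl stop snowowl.service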
These commands provide no feedback as to whether Snow Owl was started successfully or not. Instead, this information will be written in the log files located in /var/log/snowowl/.
You can test that your Snow Owl instance is running by sending an HTTP request to:
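For example, using the status endpoint:

    curl http://localhost:8080/snowowl/info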
which should give you a response something like this:
Snow Owl defaults to using /etc/snowowl for runtime configuration. The ownership of this directory and all files in this directory are set to root:snowowl on package installation, and the directory has the setgid flag set so that any files and subdirectories created under /etc/snowowl are created with this ownership as well (e.g., if a keystore is created using the keystore tool). It is expected that this be maintained so that the Snow Owl process can read the files under this directory via the group permissions.
Snow Owl loads its configuration from the /etc/snowowl/snowowl.yml file by default. The format of this config file is explained in Configuring Snow Owl.
NOTE: Distributions that use systemd require that system resource limits be configured via systemd rather than via the /etc/default/snowowl file.
The Debian package places config files, logs, and the data directory in the appropriate locations for a Debian-based system:
home: Snow Owl home directory or $SO_HOME (location: /usr/share/snowowl)
bin: Binary scripts including startup/shutdown to start/stop the instance (location: /usr/share/snowowl/bin)
conf: Configuration files including snowowl.yml (location: /etc/snowowl)
data: The location of the data files and resources (location: /var/lib/snowowl; setting: path.data)
logs: Log files location (location: /var/log/snowowl)
You now have a test Snow Owl environment set up. Before you start serious development or go into production with Snow Owl, you must do some additional setup:
Learn how to configure Snow Owl.
Configure important Snow Owl settings.
Configure important system settings.
Most operating systems try to use as much memory as possible for file system caches and eagerly swap out unused application memory. This can result in parts of the JVM heap or even its executable pages being swapped out to disk.
Swapping is very bad for performance, and should be avoided at all costs. It can cause garbage collections to last for minutes instead of milliseconds and can cause services to respond slowly or even time out.
There are two approaches to disabling swapping. The preferred option is to completely disable swap, but if this is not an option, you can minimize swappiness.
Usually Snow Owl is the only service running on a box, and its memory usage is controlled by the JVM options. There should be no need to have swap enabled.
On Linux systems, you can disable swap temporarily by running:
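For example:

    sudo swapoff -a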
To disable it permanently, you will need to edit the /etc/fstab file and comment out any lines that contain the word swap.
Another option available on Linux systems is to ensure that the sysctl value vm.swappiness is set to 1. This reduces the kernel’s tendency to swap and should not lead to swapping under normal circumstances, while still allowing the whole system to swap in emergency conditions.
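e.g., to apply it temporarily:

    sysctl -w vm.swappiness=1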
While Snow Owl requires very little configuration, there are a number of settings which need to be considered before going into production.
The following settings must be considered before going to production:
By default, Snow Owl includes the OSS version of Elasticsearch and runs it in embedded mode to store terminology data and make it available for search. This is convenient for single node environments (eg. for evaluation, testing and development), but it might not be sufficient when you go into production.
To configure Snow Owl to connect to an Elasticsearch cluster, change the clusterUrl property in the snowowl.yml configuration file:
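Based on the repository.index.clusterUrl parameter described on the configuration page (the host is a placeholder):

    repository:
      index:
        clusterUrl: http://your-es-cluster:9200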
The value for this setting should be a valid HTTP URL pointing to the HTTP API of your Elasticsearch cluster, which by default runs on port 9200.
If you are using the .zip or .tar.gz archives, the data and logs directories are sub-folders of $SO_HOME. If these important folders are left in their default locations, there is a high risk of them being deleted while upgrading Snow Owl to a new version.
In production use, you will almost certainly want to change the locations of the data and log folders.
The RPM and Debian distributions already use custom paths for data and logs.
To allow clients to connect to Snow Owl, make sure you open access to the following ports:
8080/TCP: Used by Snow Owl Server's REST API for HTTP access
8443/TCP: Used by Snow Owl Server's REST API for HTTPS access
2036/TCP: Used by the Net4J binary protocol connecting Snow Owl clients to the server
By default, Snow Owl tells the JVM to use a heap with a minimum and maximum size of 2 GB. When moving to production, it is important to configure heap size to ensure that Snow Owl has enough heap available.
To configure the heap size settings, change the -Xms and -Xmx settings in the SO_JAVA_OPTS environment variable.
The value for these settings depends on the amount of RAM available on your server and whether you are running Elasticsearch on the same node as Snow Owl (either embedded or as a service) or running it in its own cluster. Good rules of thumb are:
Set the minimum heap size (Xms) and maximum heap size (Xmx) to be equal to each other.
Too much heap can subject the JVM to long garbage collection pauses.
Set Xmx to no more than 50% of your physical RAM, to ensure that there is enough physical RAM left for kernel file system caches.
Snow Owl connecting to a remote Elasticsearch cluster requires less memory, but make sure you still allocate enough for your use cases (classification, batch processing, etc.).
Snow Owl uses a mmapfs directory by default to store its data. The default operating system limits on mmap counts are likely to be too low, which may result in out-of-memory exceptions.
On Linux, you can increase the limits by running the following command as root:
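This is the same value required by the Docker instructions:

    sysctl -w vm.max_map_count=262144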
To set this value permanently, update the vm.max_map_count setting in /etc/sysctl.conf. To verify after rebooting, run sysctl vm.max_map_count.
The RPM and Debian packages will configure this setting automatically. No further configuration is required.
Ideally, Snow Owl should run alone on a server and use all of the resources available to it. In order to do so, you need to configure your operating system to allow the user running Snow Owl to access more resources than allowed by default.
The following settings must be considered before going to production:
Where to configure systems settings depends on which package you have used to install Snow Owl, and which operating system you are using.
When using the .zip or .tar.gz packages, system settings can be configured:
temporarily with ulimit, or
permanently in /etc/security/limits.conf.
When using the RPM or Debian packages, most system settings are set in the system configuration file. However, systems which use systemd require that system limits are specified in a systemd configuration file.
On Linux systems, ulimit can be used to change resource limits on a temporary basis. Limits usually need to be set as root before switching to the user that will run Snow Owl. For example, to set the number of open file handles (ulimit -n) to 65,536, you can do the following:
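A sketch (the snowowl user name follows the examples elsewhere in these docs):

    sudo su                  # become root
    ulimit -n 65536          # raise the open file limit for this session
    su snowowl               # switch to the user that runs Snow Owl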
The new limit is only applied during the current session.
You can consult all currently applied limits with ulimit -a.
On Linux systems, persistent limits can be set for a particular user by editing the /etc/security/limits.conf file. To set the maximum number of open files for the snowowl user to 65,536, add the following line to the limits.conf file:
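i.e. a line like:

    snowowl  -  nofile  65536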
This change will only take effect the next time the snowowl user opens a new session.
When using the RPM or Debian packages, system settings and environment variables can be specified in the system configuration file, which is located in:
RPM: /etc/sysconfig/snowowl
Debian: /etc/default/snowowl
However, when using the RPM or Debian packages on systems that use systemd, system limits must be specified via systemd.
The systemd service file (/usr/lib/systemd/system/snowowl.service) contains the limits that are applied by default.
To override them, add a file called /etc/systemd/system/snowowl.service.d/override.conf (alternatively, you may run sudo systemctl edit snowowl, which opens the file automatically inside your default editor). Set any changes in this file, such as:
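For example, to raise the open file limit (the directive is standard systemd; the value mirrors earlier guidance):

    [Service]
    LimitNOFILE=65536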
Once finished, run the following command to reload units:
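That is:

    sudo systemctl daemon-reload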
An orderly shutdown of Snow Owl ensures that Snow Owl has a chance to cleanup and close outstanding resources. For example, an instance that is shutdown in an orderly fashion will initiate an orderly shutdown of the embedded Elasticsearch instance, gracefully close and disconnect connections and perform other related cleanup activities. You can help ensure an orderly shutdown by properly stopping Snow Owl.
If you’re running Snow Owl as a service, you can stop Snow Owl via the service management functionality provided by your installation.
If you’re running Snow Owl directly, you can stop Snow Owl by sending Ctrl-C if you’re running Snow Owl in the console, or by invoking the provided shutdown script as follows:
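A sketch (the script name is an assumption):

    ./bin/shutdown.sh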
The method for starting Snow Owl varies depending on how you installed it.
If you installed Snow Owl with a .tar.gz or zip package, you can start Snow Owl from the command line.
Snow Owl can be started from the command line as follows:
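From $SO_HOME:

    ./bin/snowowl.sh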
By default, Snow Owl runs in the foreground, prints some of its logs to the standard output (stdout), and can be stopped by pressing Ctrl-C.
All scripts packaged with Snow Owl assume that Bash is available at /bin/bash. As such, Bash should be available at this path either directly or via a symbolic link.
To run Snow Owl as a daemon, use the following command:
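A sketch using nohup, as shown on the installation page:

    nohup ./bin/snowowl.sh > snowowl.out 2>&1 &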
Log messages can be found in the $SO_HOME/serviceability/logs/ directory.
The startup scripts provided in the RPM and Debian packages take care of starting and stopping the Snow Owl process for you.
Snow Owl is not started automatically after installation. How to start and stop Snow Owl depends on whether your system uses SysV init or systemd (used by newer distributions). You can tell which is being used by running this command:
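For example:

    ps -p 1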
Use the chkconfig command to configure Snow Owl to start automatically when the system boots up:
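For example (service name assumed, as before):

    sudo chkconfig --add snowowl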
Snow Owl can be started and stopped using the service command:
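For example:

    sudo service snowowl start
    sudo service snowowl stop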
If Snow Owl fails to start for any reason, it will print the reason for failure to STDOUT. Log files can be found in /var/log/snowowl/.
To configure Snow Owl to start automatically when the system boots up, run the following commands:
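Using the snowowl.service unit, as on the package installation pages:

    sudo /bin/systemctl daemon-reload
    sudo /bin/systemctl enable snowowl.service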
Snow Owl can be started and stopped as follows:
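For example:

    sudo systemctl start snowowl.service
    sudo systemctl stop snowowl.service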
These commands provide no feedback as to whether Snow Owl was started successfully or not. Instead, this information will be written in the log files located in /var/log/snowowl/.
Snow Owl uses a number of thread pools for different types of operations. It is important that it is able to create new threads whenever needed. Make sure that the number of threads that the Snow Owl user can create is at least 4096.
This can be done by setting ulimit -u 4096 as root before starting Snow Owl, or by setting nproc to 4096 in /etc/security/limits.conf.
The package distributions when run as services under systemd will configure the number of threads for the Snow Owl process automatically. No additional configuration is required.
Snow Owl security features enable you to easily secure your terminology server. You can password-protect your data as well as implement more advanced security measures such as role-based access control and auditing.
By default Snow Owl comes without any security features enabled and all read and write operations are unprotected. To configure a security realm, you can choose from the following built-in identity providers:
After configuring at least one security realm, Snow Owl will authenticate all incoming requests to ensure that the sender of the request is allowed to access the terminology server and its contents. To authenticate a request, the client must send an HTTP Basic or Bearer Authorization header with the request. The value should be a user/pass pair in case of using Basic authentication, or a JWT token generated by Snow Owl if using the Bearer method.
NOTE: It is recommended in production environments that all communication between a client and Snow Owl is performed through a secure connection.
Snow Owl sends an HTTP 401 Unauthorized response if a request needs to be authenticated.
If supported by the security realm, Snow Owl will also check whether an authenticated user is permitted to perform the requested action on a given resource.
Within an organization, roles are created for various job functions. The permissions to perform certain operations are assigned to specific roles. Members, staff or other system users are assigned particular roles, and through those role assignments acquire the permissions needed to perform particular system functions. Since users are not assigned permissions directly, but only acquire them through their role (or roles), management of individual user rights becomes a matter of simply assigning appropriate roles to the user's account; this simplifies common operations, such as adding a user, or changing a user's department.
Role assignment: A subject can exercise a permission only if the subject has selected or been assigned a role.
Permission authorization: A subject can exercise a permission only if the permission is authorized for the subject's active role.
With rules 1 and 2, it is ensured that users can exercise only permissions for which they are authorized.
S = Subject = A person or automated agent
R = Role = Job function or title which defines an authority level
P = Permissions = An approval of a mode of access to a resource
In Snow Owl a permission is a single value that represents both the operation the user would like to perform and the resource that is being accessed. The format is the following: <operation>:<resource>
Currently there are 7 operations supported by Snow Owl:
browse - read the contents of a resource
edit - write the contents of the resource, delete the resource
import - import from external content and formats
export - export to external content and formats
version - create a version in a Code System, create a release
promote - merge content from isolated branch environments to a Code System's development version
classify - run classifiers and save their results
Resources represent the content that is being accessed by a client. A resource can be anything that can be resolved to a database entry. Currently, the following resource formats are allowed to be used in a permission:
<repositoryId> - access the entire content available in a terminology repository
<repositoryId>/<branch> - access the content available on a branch in a terminology repository
<codeSystemId> - access all content of a Code System, including both the latest development and all previous releases
<codeSystemId>/<versionId> - access a specific release of a Code System
There is a special * wild card character that can be used for both the operation and resource parts in a permission value to allow any operation to be performed on any or selected resources, or to allow certain operations to be performed on any available resources.
Examples:
browse:snomedStore - browse all SNOMED CT Code Systems and their content
edit:SNOMEDCT-UK-CL - edit the SNOMEDCT-UK-CL Code System
export:SNOMEDCT-US/2019-03-01 - export the 2019-03-01 US Extension release
*:SNOMEDCT - allow any operations to be performed on the SNOMEDCT Code System
browse:* - allow read operations on all available resources
*:* - administrator permission, the user can do anything with any of the available resources
To configure authorization, please consult the security realm specific documentation:
You can manage and authenticate users with the built-in file realm. All the data about the users for the file realm is stored in the users file. The file is located in SO_PATH_CONF and is read on startup.
You need to explicitly select the file realm in the snowowl.yml configuration file in order to use it for authentication.
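A sketch of such a configuration; the exact YAML keys are assumptions:

    identity:
      providers:
        - file:
            name: users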
In the above configuration the file realm is using the users file to read your users from. Each row in the file represents a username and password delimited by the : character. The passwords are BCrypt encrypted hashes. The default users file comes with a default snowowl user with the default snowowl password.
To simplify file realm configuration, the Snow Owl CLI comes with a command to add a user to the file realm (snowowl users add). See the command help manual (-h option) for further details.
The file security realm does NOT support the Authorization formats at the moment. If you are interested in configuring role-based access control for your users, it is recommended to switch to the LDAP realm.
You can configure security to communicate with a Lightweight Directory Access Protocol (LDAP) server to authenticate and authorize users.
To integrate with LDAP, you configure an ldap realm in the snowowl.yml configuration file.
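A sketch using the settings documented below (values are placeholders and the surrounding YAML structure is an assumption):

    identity:
      providers:
        - ldap:
            uri: ldap://localhost:10389
            bindDn: cn=admin,dc=example,dc=com
            bindDnPassword: <secret>
            baseDn: dc=example,dc=com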
The following configuration settings are supported:
uri: The LDAP URI that points to the LDAP/AD server to connect to.
bindDn: The user's DN who has access to the entire baseDn and roleBaseDn and can read content from it.
bindDnPassword: The password of the bindDn user.
baseDn: The base directory where all entries in the entire subtree will be considered as potential matches for all searches.
roleBaseDn: Alternative base directory where all role entries in the entire subtree will be considered. Defaults to the baseDn value.
userFilter: The search filter to search for user entries under the configured baseDn. Defaults to (objectClass={userObjectClass}).
roleFilter: The search filter to search for role entries under the configured roleBaseDn. Defaults to (objectClass={roleObjectClass}).
userObjectClass: The user object's class to look for when searching for user entries. Defaults to the inetOrgPerson class.
roleObjectClass: The role object's class to look for when searching for role entries. Defaults to the groupOfUniqueNames class.
userIdProperty: The userId property to access and read for the user's unique identifier. Usually their username or email address. Defaults to the uid property.
permissionProperty: A multi-valued property that is used to store permission information on a role. Defaults to the description property.
memberProperty: A multi-valued property that is used to store and retrieve user dns that belong to a given role. Defaults to the uniqueMember property.
The default configuration values are selected to support both OpenLDAP and Active Directory without needing to customize the default schema that comes with their default installation.
When users send their username and password with their request in the Authorization header, the LDAP security realm performs the following steps to authenticate the user:
Searches for a user entry in the configured baseDn to get the DN
Authenticates with the LDAP instance using the received DN and the provided password
If any of the above-mentioned steps fails for any reason, the user is not allowed to access the terminology server's content and the server will respond with HTTP 401 Unauthorized.
To configure authentication, you need to configure the uri, baseDn, bindDn, bindDnPassword, userObjectClass and userIdProperty configuration settings.
To add a user in the LDAP realm, create an entry under the specified baseDn using the configured userObjectClass as class and the userIdProperty as the property where the user's username/e-mail address is configured.
Example user entry:
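A sketch in LDIF, using the default inetOrgPerson class and uid property (DN and values are placeholders):

    dn: uid=jdoe@example.com,dc=example,dc=com
    objectClass: inetOrgPerson
    cn: John Doe
    sn: Doe
    uid: jdoe@example.com
    userPassword: <secret>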
On top of the authentication part, the LDAP realm provides configuration values to support full role-based access control and authorization.
When a user's request is successfully authenticated with the LDAP realm, Snow Owl authorizes the request using the user's currently set roles and permissions in the configured LDAP instance.
To add a role in the LDAP realm, create an entry under the specified baseDn using the configured roleObjectClass as class and the configured permissionProperty and memberProperty properties for permission and user mappings, respectively.
Example read-only role:
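A sketch using the default groupOfUniqueNames class, with the permission stored in description and membership in uniqueMember (values are placeholders):

    dn: cn=browsers,dc=example,dc=com
    objectClass: groupOfUniqueNames
    cn: browsers
    description: browse:*
    uniqueMember: uid=jdoe@example.com,dc=example,dc=com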
Coming soon!
This section describes the use case scenarios present in the world of SNOMED CT and how Snow Owl can be used in those scenarios to maximize its full potential. Each scenario comes with a summary and a pros/cons section to help your decision making process when selecting the appropriate scenario for your use case.
Snow Owl is a multi-purpose terminology server with a main focus on the SNOMED CT International Edition and its Extensions. Whether you are a producer of a SNOMED CT Extension or a consumer of one, Snow Owl has you covered. As always, feel free to ask your questions regarding any of the content you read here (raise a ticket on the project's GitHub issue tracker).
Snow Owl uses the following basic concepts to provide authoring and maintenance support for SNOMED CT Extensions.
From the core concepts page, we've learned what a Repository is and how Code Systems are defined as part of a Repository.
Reminder: a Repository is a set of schemas and functionality that supports a dedicated set of Code Systems (eg. the SNOMED CT Repository stores all SNOMED CT related components under revision control and provides quick access to them). A Repository can contain one or more Code Systems and by default always comes with one predefined Code System, the root Code System (in the case of SNOMED CT, this often represents the International Edition).
SNOMED CT Extensions in Snow Owl are essentially Code Systems with their own set of properties and characteristics. With Snow Owl's Code System API, a Code System can be created for each SNOMED CT Extension to easily identify the Code System and its components with a single unique identifier, called the Code System short name. The recommended naming approach when selecting the unique short name identifier is the following:
SNOMED CT International Edition: SNOMEDCT
- often included in other editions for distribution purposes
National Release Center (single maintained extension) - SNOMEDCT-US
- represents the SNOMED CT United States of America Extension
National Release Center (multiple maintained extensions) - SNOMEDCT-UK-CL
, SNOMEDCT-UK-DR
- United Kingdom Clinical and Drug Extensions, respectively
Care Provider with a special extension based on a national extension - SNOMEDCT-US-UNMC
- University of Nebraska Medical Center's extension builds on top of the SNOMEDCT-US
extension
The primary namespace identifier and the set of modules and languages can be set during the creation of the Code System, and can be updated later on if required. These properties can be used when users are accessing the terminology server for authoring purposes to provide a seamless authoring experience, without users needing to worry about selecting the proper namespace, modules, language tags, etc. (NOTE: this feature is not available yet in the OSS version of Snow Owl)
A Snow Owl Code System can be marked as an extensionOf
another Code System, which ties them together, forming a dependency between the two Code Systems. A Code System can have multiple Extension Code Systems, but a Code System can only be extensionOf
a single Code System.
In Snow Owl, a Repository maintains a set of branches and Code Systems are always attached to a dedicated branch. For example, the default root Code Systems are always tied to the default branch, called MAIN
. When creating a new Code System, the "working" branchPath
can be specified and doing so assigns the branch to the Code System. A Code System cannot be attached to multiple branches at the same time, and a branch can only be assigned to a single Code System in a Repository. Snow Owl's branching infrastructure allows the use of isolated environments for both distribution and authoring workflows, so branches play a crucial role in SNOMED CT Extension management as well. They also support a seamless upgrade mechanism, which can be used whenever a new version becomes available in one of your SNOMED CT Extension's dependent Code Systems.
As in real life, a Code System can have zero or more versions (also called releases). A version is a special branch that is created during the versioning process; it makes the latest available content accessible later in its then-current form. Since SNOMED CT Extensions can have releases as well, creating a Code System Version in Snow Owl is a must in order to produce the release packages.
The following image shows the repository content rendered from the available commits, after a successful International Edition import.
Dots represent commits made with the commit message on the right. Green boxes represent where the associated branch's HEAD
is currently located. Blue tag labels represent versions created during the commit.
If your use case is to import the SNOMED CT US Extension 2019-09-01 version into this repository, then ideally it would look like this:
The next section describes the use case scenarios in the world of SNOMED CT and the recommended approaches for deploying these scenarios in Snow Owl.
The Snow Owl Terminology Server is capable of managing multiple SNOMED CT extensions for both distribution and authoring purposes in a single deployment. This guide describes the typical scenarios, like creating, managing, releasing and upgrading SNOMED CT Extensions in great detail with images. If you are unfamiliar with SNOMED CT Extensions, the next section walks you through their logical model and basic characteristics, while the following pages describe distribution and authoring scenarios as well as how to use the Snow Owl Terminology Server for SNOMED CT Extensions.
The official SNOMED CT Extension Practical Guide has been used to help produce the content available on this page: https://confluence.ihtsdotools.org/display/DOCEXTPG
SNOMED CT is a multilingual clinical terminology that covers a broad scope. However, some users may need additional concepts, relationships, descriptions or reference sets to support national, local or organizational needs.
The extension mechanism allows SNOMED CT to be customized to address the terminology needs of a country or organization that are not met by the International Edition.
A SNOMED CT Extension may contain components and/or derivatives (e.g. reference sets used to represent subsets, maps or language preferences). Since the international edition and all extensions share a common structure, the same application software can be used to enter, store and process information from different extensions. Similarly, reference sets can be constructed to refer to content from both the international release and extensions. The common structure also makes it easier for content developed by an extension producer to be submitted for possible inclusion in a National Edition or the International Edition.
Therefore, a SNOMED CT Extension uses the same Release Format version 2 as the International Edition, they share a common structure and schema (see ).
Extensions are managed by SNOMED International, and Members or Affiliate Licensees who have been issued a namespace identifier by SNOMED International. A namespace identifier is used to create globally unique SNOMED CT identifiers for each component (i.e. concept, description and relationship) within a Member or Affiliate extension. This ensures that references to extension concepts contained in health record data are unambiguous and can be clearly attributed to a specific issuing organization.
A national or local extension uses a namespace identifier issued by SNOMED International to ensure that all extension components can be uniquely identified (across all extensions).
Therefore, a SNOMED CT Extension uses a single namespace identifier to identify all core components in the SNOMED CT Extension (see ).
Every SNOMED CT Extension includes one or more modules, and each module contains either SNOMED CT components or reference sets (or both). Modules may be dependent on other modules. A SNOMED CT Edition includes the contents of a focus module together with the contents of all the modules on which it depends. This includes the modules in the International Edition and possibly other modules from a national and/or local extension.
An edition is defined based on a single focus module. This focus module must be the most dependent module, in that the focus module is dependent on all the other modules in the edition.
SNOMED CT extensions can support a variety of use cases, including:
Translating SNOMED CT, for example
Adding terms used in a local language or dialect
Adding terms used by a specific user group, such as patient-friendly terms
Representing language, dialect or specialty-specific term preferences is possible using a SNOMED CT extension. The logical design of SNOMED CT enables a single clinical idea to be associated with a range of terms or phrases from various languages, as depicted in Figure 3.1-1 below. In an extension, terms relevant for a particular country, specialty, hospital (or other organization) may be created, and different options for term preferences may be specified. Even within the same country, different regional dialects or specialty-specific language use may influence which synonyms are preferred. SNOMED CT supports this level of granularity for language preferences at the national or local level.
A SNOMED CT extension is a set of components and reference set members that add to the SNOMED CT International Edition. An extension is created, structured, maintained and distributed in accordance with SNOMED CT specifications and guidelines. Unlike the International Edition, an extension is not a standalone terminology. The content in an extension depends on the SNOMED CT International Edition, and must be used together with the International Edition and any other extension module on which it depends.
A specific version of an extension can be referred to using the date on which the extension was published.
There are many use cases that require a date specific version of an edition, including specifying the substrate of a SNOMED CT query, and specifying the version of SNOMED CT used to code a specific data element in a health record. A versioned edition includes the contents of the specified version of the focus module, plus the contents of all versioned modules on which the versioned focus module depends (as specified in the |Module dependency reference set|). The version of an edition is based on the date on which the edition was released. Many extension providers release their extensions as a versioned edition, using regular and predictable release cycles.
To summarize, a SNOMED CT Extension has the following characteristics:
Uses the same RF2 structure as the SNOMED CT International Edition
Uses a single namespace identifier to globally identify its content
Uses one or more modules to categorize the content into groups
Uses one or more languages to support specific user groups and patient-friendly terms
Depends on the SNOMED CT International Edition
Uses versions (effective times) to identify its content across multiple releases
Now that we have a clear understanding of what SNOMED CT Extensions are, let's take a look at how we can use them in Snow Owl.
Therefore, a SNOMED CT Extension uses one or more modules to categorize the components into meaningful groups (see ).
Therefore, an Extension can have its own language to support patient-friendly terms, local user groups, etc. (see ).
Therefore, a SNOMED CT Extension depends on the SNOMED CT International Edition directly or indirectly through another SNOMED CT Extension (see ).
Therefore, a SNOMED CT Extension can be versioned and have a different release cycle than the SNOMED CT International Edition (see ).
The most common use case to consume a SNOMED CT Release Package is to import it directly into a Terminology Server (like Snow Owl) and make it available as read-only content for both human and machine access (via REST and FHIR APIs).
Since Snow Owl by default comes with a pre-initialized SNOMED CT Code System called SNOMEDCT
, it is just a single call to import the official RF2 package using the SNOMED CT RF2 Import API. The import by default creates a Code System Version for each SNOMED CT Effective Date available in the supplied RF2 package. After a successful import, the content is immediately available via REST and FHIR APIs.
National Release Centers and other Care Providers provide their own SNOMED CT Edition distribution for third-party consumers in RF2 format. Importing their Edition distribution instead of the International Edition directly into the SNOMEDCT
pre-initialized SNOMED CT Code System with the same SNOMED CT RF2 Import API makes both the International Edition (always included in Edition packages) and the National Extension available for read-only access.
The single edition scenario provides access to any SNOMED CT Edition directly on the pre-initialized SNOMEDCT Code System without much effort. It is easy to set up and maintain. Because of its flat structure, it is a good fit for distribution and extension consumers. Although it can be used for authoring in certain scenarios, due to the missing distinction between the International Edition and the Extension, it is not the best choice for extension authoring and maintenance.
This scenario can be further extended to support multiple simultaneous Edition releases living on their own dedicated SNOMED CT Code Systems. The Root SNOMEDCT
Code System in this case is empty and only serves the purpose of creating other Code Systems "underneath" it. Each SNOMED CT Code System is then imported into its own dedicated branch forming a star-like branch structure at the end (zero-length MAIN
branch and content branches). This is useful in distribution scenarios, where multiple Extension Code Systems need to be maintained with their own dedicated set of dependencies and there is no time to set up the proper Extension Scenario (see next section). The only drawback of this setup is the potentially high usage of disk space due to the overlap between the various Editions imported into their own Code Systems (since each of them contains the entire International Release).
Pros:
Good for maintaining the SNOMED CT International Edition
Good for distribution
Simple to set up and maintain
Cons:
Not recommended for extension authoring and maintenance
Not recommended for multi-extension distribution scenarios
A typical extension scenario is the development of the extension itself. Whether you are starting your extension from scratch or already have a well-developed version that you need to maintain, the first choice you need to make is to identify the dependencies of your SNOMED CT Extension.
If your Extension extends the SNOMED CT International Edition directly, then you need to pick one of the available International Edition versions:
If you are starting from scratch, it is always recommended to select the latest International Release as the starting point of your Extension.
If you have an existing Extension then you probably already know the International Release version your Extension depends on.
When you have identified the version you need to depend on, import that version (or a later release package whose FULL RF2 files also include that version) into Snow Owl first. Make sure that the createVersion
feature of the RF2 import process is enabled, so it will automatically create the versions for each imported RF2 effectiveTime
value.
After you have successfully imported all dependencies into Snow Owl, the next step is to create a Code System that represents your SNOMED CT Extension (see ). When creating the Code System, besides specifying the namespace and optional modules and languages, you need to enter a Code System shortName
, which will serve as the unique identifier of your Extension, and select the extensionOf
value, which represents the dependency of the Code System.
After you have successfully created the Code System representing your Extension, you can import any existing content from its most recent release, or start from scratch by creating the module concept of your extension.
If your Extension needs to extend another Extension and not the International Edition itself, then you need to identify the version you'd like to depend on in that Extension (that indirectly will select the International Edition dependency as well). When you have identified all required versions, then starting from the International Edition recursively traverse back and repeat the RF2 Import and Code System creation steps described in the previous section until you have finally imported your extension. In the end your extension might look like this, depending on how many Extensions you are depending on.
Pros:
Excellent for authoring and maintenance
Good for distribution
Cons:
Harder to set up the initial deployment
RF2 releases sometimes have content issues or refer to content missing from the International Edition when you try to import them into Snow Owl via the RF2 Import API. For this reason, the recommended way is to always use the most recent Snapshot RF2 release of a SNOMED CT Extension to form its first representation in Snow Owl. That has a high probability of success without any missing component dependency errors during import. If you are having trouble importing an RF2 Release Package into Snow Owl, feel free to raise a question on our page.
Setting up a Snow Owl deployment like this is not an easy task. It requires a thorough understanding of each SNOMED CT Extension you'd like to import and their dependencies as well. However, after the initial setup, the maintenance of your Extension becomes straightforward, thanks to the clear distinction from the International Edition and from its other dependencies. The release process is easier and you can choose to publish your Extension as an extension only release, or as an Edition or both (see ). Additionally, when a new version is available in one of the dependencies, you will be able to upgrade your Extension with the help of automated validation rules and upgrade processes (see ). From the distribution perspective, this scenario shines when you need to maintain multiple Extensions/Editions in a single deployment.
On top of single Edition/Extension distribution and authoring, Snow Owl provides full support for multi-SNOMED CT distribution and authoring even if the Extensions depend on different versions of the SNOMED CT International Edition.
To achieve a deployment like this you need to perform the same initialization steps for each desired SNOMED CT Extension as if it were a single extension scenario (see single extension). Development and maintenance of each managed extension can happen in parallel without affecting one or the other. Each of them can have their own release cycles, maintenance and upgrade schedules, and so on.
After you have initialized your Snow Owl instance with the Extensions you'd like to maintain, the next steps are:
Authoring is the process by which content is created in an extension in accordance with a set of authoring principles. These principles ensure the quality of content and referential integrity between content in the extension and content in the International Edition (the principles are set by SNOMED International and can be found here).
During the extension development process authors are:
creating, modifying or inactivating content according to editorial principles and policies
running validation processes to verify the quality and integrity of their Extension
classifying their authored content with an OWL Reasoner to produce its distribution normal form
The authors directly (via the available REST and FHIR APIs) or indirectly (via user interfaces, scripts, etc.) work with the Snow Owl Terminology Server to make the necessary changes for the next planned Extension release.
Authors often require a dedicated editing environment where they can make the necessary changes and let others review the changes they have made, so errors and issues can be corrected before integrating the change with the rest of the Extension. Similarly to how SNOMED CT Extensions are separated from the SNOMED CT International Edition and other dependencies, this can be achieved by using branches.
Branching API - to create and merge branches
Compare API - to compare branches
To let authors make the necessary changes they need, Snow Owl offers the following SNOMED CT component endpoints to work with:
Concept API - to create, edit SNOMED CT Concepts
Description API - to create, edit SNOMED CT Descriptions
Relationship API - to create, edit SNOMED CT Relationships
Reference Set API - to create, edit SNOMED CT Reference Sets
Reference Set Member API - to create, edit SNOMED CT Reference Set Members
To verify quality and integrity of the changes they have made, authors often generate reports and make further fixes according to the received responses. In Snow Owl, reports and rules can be represented with validation queries and scripts.
Validation API - to run validation rules and fetch their reported issues on a per branch basis
Last but not least, authors run an OWL Reasoner to classify their changes and generate the necessary normal form of their Extension. The Classification API provides support for running these reasoner instances and generating the necessary normal form.
When an Extension reaches the end of its current development cycle, it needs to be prepared for release and distribution.
All planned content changes that are still on their dedicated branch either need to be integrated with the main development version or removed from the scope of the next release.
After all development branches have been merged and integrated with the main work-in-progress version, the Extension needs to be prepared for release. This usually involves last minute fixes, running quality checks and validation rules and generating the final necessary normal form of the Extension.
When all necessary steps have been performed successfully, a new Code System Version needs to be created in Snow Owl to represent the latest release. The versioning process will assign the requested effectiveTime
to all unpublished components, update the necessary Metadata reference sets (like the Module Dependency Reference Set) and finally create a version branch to reference this release later.
After a successful release, an RF2 Release Package needs to be generated for downstream consumers of your Extension. Snow Owl can generate the final RF2 Release Package for the newly released version via the RF2 Export API.
Maintenance of a SNOMED CT Extension is essential to ensure that
it incorporates changes requested by terminology consumers
it remains aligned with the SNOMED CT International Edition
While both of these maintenance related tasks are potentially assigned to one of the upcoming Extension development cycles, there is a clear distinction between the two maintenance tasks.
See additional Extension maintenance related material in the official .
Changes requested by your terminology consumers are typically content authoring tasks that you would assign to an Extension authoring team. They usually come with a well-described problem you need to address in the terminology as you would do in the usual development cycle.
See the section on how you can address change requests and incorporate them as regular tasks into the main version of your Extension.
Aligning content to the SNOMED CT International Edition is one of the main responsibilities of an Extension maintainer. However, keeping up with the changes introduced in SNOMED CT International Edition biannually (on January 31st and July 31st) can be an overwhelming task, especially if:
you are under pressure from your terminology consumers to make the requested changes ASAP, especially in mission critical scenarios.
the changes introduced in the International Edition are conflicting with your local changes and/or causing maintenance related issues after the upgrade.
To address SNOMED CT International Edition upgrade tasks in a reliable and reproducible way, Snow Owl offers an upgrade flow for SNOMED CT Extensions.
A Code System upgrade in Snow Owl is a complex workflow with states and steps. The workflow involves a special Upgrade Code System, a series of automated migration processes and validation rules to ensure the quality and reliability of the operation. The upgrade can be done quickly if there were no conflicts between the Extension and the International Edition. However, upgrades can also be a long-running process spanning many months when significant structural changes (e.g. in substances, anatomy, or modeling approach) are made in the International Edition.
In Snow Owl, SNOMED CT Extensions are linked to their SNOMED CT dependency with the extensionOf
property. This property describes the International Edition and its version that the Extension depends on. For example, the SNOMEDCT/2019-07-31
value specifies that our Extension depends on the 2019-07-31 version of the International Edition.
Extension upgrades can be started when there is a new version available in the Extension/Edition we have selected as our dependency in the extensionOf
property. When fetching a SNOMED CT Code System via the Code System API, Snow Owl will check whether any upgrades are available and return them in the availableUpdates
array property. If there are no upgrades available the array will be empty.
When the upgrade is started, Snow Owl creates a special <codeSystemShortName>-UP-<newExtensionOf>
(eg. SNOMEDCT-MYEXT-UP-SNOMEDCT-2020-01-31
) Code System to allow authors and the automated processes to migrate the latest development version of the Extension to the new dependency.
Regular daily Extension development tasks still need to be resolved and pushed somewhere, even while an upgrade process is in progress. Each Extension therefore retains an active development version, which can be used to push daily maintenance changes and business-as-usual tasks.
Changes pushed to the development area will regularly need to be synced with the upgrade until the upgrade completes, so the upgrade team will be able to resolve all remaining conflicts and issues.
Upgrade Checks ensure the quality of the upgrade process and execute certain tasks/checks automatically. An Upgrade Check can be any logic or function to be run during the upgrade. Upgrade Checks can access the underlying upgrade Code System's content and report any issues (validation rules) or fix content automatically (migration rules). For example, a validation rule (like Active relationships must have active source, type, destination references
) can be executed after each change pushed to the upgrade branch to verify whether there is any potentially invalid relationship left to fix or you are ready to go.
Once the upgrade authoring team is done with the necessary changes to align the Extension with the new International Edition, and all the checks have completed successfully, the upgrade can be completed. Completing the upgrade performs the following steps:
Creates a <codeSystemShortName>-DO-<previousExtensionOf>
Code System to refer to the previous state of the Extension
Changes the current working branch of the Extension Code System to the branch that was used during the upgrade process
Deletes the <codeSystemShortName>-UP-<newExtensionOf>
Code System, which marks the upgrade complete, and the upgrade itself cannot be accessed anymore.
To start an Extension upgrade to a newer International Edition (or to a newer Extension dependency version), you can use the . The only thing that needs to be specified there is the desired new version of the Extension's extensionOf
dependency.
This describes the resources that make up the official Snow Owl® RESTful API.
Custom media types are used in the API to let consumers choose the format of the data they wish to receive. This is done by adding one of the following types to the Accept header when you make a request. Media types are specific to resources, allowing them to change independently and support formats that other resources don’t.
The most basic media types the API supports are:
application/json;charset=UTF-8 (default)
text/plain;charset=UTF-8
text/csv;charset=UTF-8
application/octet-stream (for file downloads)
multipart/form-data (for file uploads)
The generic JSON media type (application/json) is available as well, but we encourage you to explicitly set the accepted content type before sending your request.
All data is sent and received as JSON. Blank fields are omitted instead of being included as null
.
All non-effective time timestamps are returned in ISO 8601 format:
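For example (illustrative value):

```
2021-07-31T10:15:30Z
```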
Effective Time values are sent and received in short format:
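For example, the effective time of the July 2021 International Edition release is represented as:

```
20210731
```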
All POST requests return Location
headers pointing to the created resource instead of including either the identifier or the entire created resource in the response body. These are meant to provide explicit URLs so that proper API clients don’t need to construct URLs on their own. It is highly recommended that API clients use these. Doing so will make future upgrades of the API easier for developers. All URLs are expected to be proper RFC 6570 URI
templates.
Example Location Header:
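An illustrative value (host, path and identifier below are placeholders):

```
Location: https://example.com/snowowl/snomedct/v3/MAIN/concepts/123037004
```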
Requests that return multiple items will be paginated to 50
items by default. You can request further pages with the searchAfter
query parameter.
Where applicable, the expand
query parameter will include nested objects in the response, to avoid having to issue multiple requests to the server.
Expanded properties should be followed by parentheses and separated by commas; any options for the expanded property should be given within the parentheses, including properties to expand. Typical values for parameters are given in the "Implementation Notes" section of each endpoint.
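For example, a hypothetical SNOMED CT concept request combining two expanded properties, one with an option and a nested expand (the path prefix is an assumption based on the SNOMED CT API section below):

```
GET /snowowl/snomedct/v3/MAIN/concepts/138875005?expand=pt(),descendants(direct:true, expand(pt()))
```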
Response:
There are three possible types of client errors on API calls that receive request bodies:
In certain circumstances, Snow Owl might fail to process and respond to a request and responds with a 500 Internal Server Error
.
To troubleshoot these please examine the log files at {SERVER_HOME}/serviceability/logs/log.log
and/or raise an issue on GitHub.
Snow Owl is a revision-based terminology server, where terminology data (concepts, descriptions, etc.) is stored in multiple revisions, across multiple branches. When requesting content from the terminology server, clients are able to specify a path value or expression to select the content they'd like to access and receive. For example, Snow Owl supports importing SNOMED CT content from different sources, allowing eg. multiple national Extensions to co-exist with the base International Edition provided by SNOMED International. Versioned editions can be consulted when non-current representations of concepts need to be accessed. Concept authoring and review can also be done in isolation. Both Java and REST API endpoints require a path
parameter to select the content (or substrate) the user wishes to work with.
The following formats are accepted:
Absolute branch path parameters start with MAIN
and point to a branch in the backing terminology repository. In the following example, all concepts are considered to be part of the substrate that are on branch MAIN/2021-01-31/SNOMEDCT-UK-CL
or any ancestor (ie. MAIN
or MAIN/2021-01-31
), unless they have been modified:
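A hypothetical request using this absolute path (the /snowowl/snomedct/v3 prefix is an assumption based on the SNOMED CT API section):

```
GET /snowowl/snomedct/v3/MAIN/2021-01-31/SNOMEDCT-UK-CL/concepts
```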
Relative branch paths start with a short name identifying a SNOMED CT code system, and are relative to the code system's working branch. For example, if the working branch of code system SNOMEDCT-UK-CL
is configured to MAIN/2021-01-31/SNOMEDCT-UK-CL
, concepts visible on authoring task #100 can be retrieved using the following request:
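A sketch of such a request (path prefix assumed as above):

```
GET /snowowl/snomedct/v3/SNOMEDCT-UK-CL/100/concepts
```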
An alternative request that uses an absolute path would be the following:
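Continuing the example above with the fully spelled-out branch path (illustrative):

```
GET /snowowl/snomedct/v3/MAIN/2021-01-31/SNOMEDCT-UK-CL/100/concepts
```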
An important difference is that the relative path
parameter tracks the working branch specified in the code system's settings, so requests using relative paths do not need to be adjusted when a code system is upgraded to a more recent International Edition.
The substrate represented by a path range consists of concepts that were created or modified between a starting and ending point, each identified by an absolute branch path (relative paths are not supported). The format of a path range is fromPath...toPath
.
To retrieve concepts authored or edited following version 2020-08-05 of code system SNOMEDCT-UK-CL, the following path expression should be used:
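Assuming version 2020-08-05 of SNOMEDCT-UK-CL lives on a branch under MAIN/2019-07-31 and its current content under MAIN/2021-01-31, the range could be written as follows (branch names are illustrative):

```
GET /snowowl/snomedct/v3/MAIN/2019-07-31/SNOMEDCT-UK-CL/2020-08-05...MAIN/2021-01-31/SNOMEDCT-UK-CL/concepts
```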
The result set also includes concepts appearing or changing between versions 2019-07-31 and 2021-01-31 of the International Edition; if this is not desired, additional constraints can be added to exclude them.
To refer to a branch state at a specific point in time, use the path@timestamp
format. The timestamp is an integer value expressing the number of milliseconds since the UNIX epoch, 1970-01-01 00:00:00 UTC, and corresponds to "wall clock" time, not component effective time. As an example, if the SNOMED CT International version 2021-07-31 is imported on 2021-09-01 13:50:00 UTC, the following request to retrieve concepts will not include any new or changed concepts appearing in this release:
Both absolute and relative paths are supported in the path
part of the expression.
Concept requests using a branch base point reflect the state of the branch at its beginning, before any changes on it were made. The format of a base path is path^
(only absolute paths are supported):
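For example, to see the working branch of SNOMEDCT-UK-CL as it was when task branch 101 was created (branch names are illustrative):

```
GET /snowowl/snomedct/v3/MAIN/2021-01-31/SNOMEDCT-UK-CL/101^/concepts
```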
Returned concepts include all additions and modifications made on SNOMEDCT-UK-CL's working branch, up to the point where task #101 starts; neither changes committed to the working branch after task #101, nor changes on task #101 itself are reflected in the result set.
Detailed API documentation is coming soon! Until then, we recommend checking out the official Swagger documentation available on your Snow Owl instance at /snowowl/admin.
This describes the resources that make up the official Snow Owl® SNOMED CT Terminology API.
Swagger documentation is available on your Snow Owl instance at /snowowl/snomedct.
SNOMED CT API endpoints are currently at version v3. You have to set the API version explicitly via a path parameter. For example:
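An illustrative request (host and concept identifier are placeholders):

```
GET https://example.com/snowowl/snomedct/v3/MAIN/concepts/138875005
```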
Coming soon!
Coming soon!
Snow Owl provides branching support for terminology repositories. In each repository there is an always existing and UP_TO_DATE
branch called MAIN. The MAIN
branch represents the latest working version of your terminology (similar to a master
branch on GitHub).
You can create your own branches and create/edit/delete components and other resources on them. Branches are identified with their full path, which should always start with MAIN
. For example the branch MAIN/a/b/c/d
represents a branch under the parent MAIN/a/b/c
with name d
.
Later you can decide to either delete the branch or merge the branch back to its parent. To properly merge a branch back into its parent, sometimes it is required to rebase (synchronize) it first with its parent to get the latest changes. This can be decided via the state attribute of the branch, which represents the current state compared to its parent state.
There are five different branch states available:
UP_TO_DATE - the branch is up-to-date with its parent; there are no changes on either the branch or its parent
FORWARD - the branch has at least one commit while the parent is still unchanged. Merging a branch requires this state, otherwise the merge will return an HTTP 409 Conflict.
BEHIND - the parent of the branch has at least one commit while the branch is still unchanged. The branch can be safely rebased with its parent.
DIVERGED - both parent and branch have at least one commit. The branch must be rebased first before it can be safely merged back to its parent.
STALE - the branch is no longer in relation with its former parent, and should be deleted.
Snow Owl supports merging of unrelated (STALE) branches, so branch MAIN/a can be merged into MAIN/b; there does not have to be a direct parent-child relationship between the two branches.
Response
Response
Input
Response
Response
Input
Response
Input
Response
Response
Response
Two categories make up Snow Owl's Reference Set API:
Reference Sets category to get, search, create and modify reference sets
Reference Set Members category to get, search, create and modify reference set members
Basic operations like create, update, delete are supported for both categories.
On top of the basic operations, reference sets and members support actions. Actions have an action property to specify which action to execute; the rest of the JSON properties will be used as the body of the action.
Supported reference set actions are:
sync - synchronize all members of a query type reference set by executing their query and comparing the results with the current members of their referenced target reference set
Supported reference set member actions are:
create - create a reference set member (uses the same body as POST /members)
update - update a reference set member (uses the same body as PUT /members)
delete - delete a reference set member
sync - synchronize a single member by executing the query and comparing the results with the current members of the referenced target reference set
For example, the following will sync a query type reference set member's referenced component with the result of the re-evaluated member's ESCG query:
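A sketch of such a call, assuming the member actions endpoint takes the action name and a commit comment in the request body ({memberId} and the path prefix are placeholders/assumptions):

```
POST /snowowl/snomedct/v3/MAIN/members/{memberId}/actions
{
  "action": "sync",
  "commitComment": "Sync query type reference set member"
}
```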
The members list of a single reference set can be modified by using the following bulk-like update endpoint:
Input
The request body should contain the commitComment property and a request array. The request array must contain actions (see Actions API) that are enabled for the given set of reference set members. Member create actions can omit the referenceSetId parameter; those will use the one defined as a path parameter in the URL. For example, by using this endpoint you can create, update and delete members of a reference set at once, in one single commit.
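An illustrative request body for such a bulk update (the requests property name, the member ids and the component values below are assumptions):

```json
{
  "commitComment": "Update members of the reference set in a single commit",
  "requests": [
    { "action": "create", "referencedComponentId": "138875005" },
    { "action": "update", "memberId": "{memberId}", "active": false },
    { "action": "delete", "memberId": "{otherMemberId}" }
  ]
}
```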
A comparison of the terminology changes committed to a source or target branch can be conducted by creating a compare resource.
A review identifier can be added to merge requests as an optional property. If the source or target branch state is different from the values captured when creating the review, the merge/rebase attempt will be rejected. This can happen, for example, when additional commits are added to the source or the target branch while a review is in progress; the review resource state becomes STALE in such cases.
Reviews and concept change sets have a limited lifetime. CURRENT reviews are kept for 15 minutes, while review objects in any other states are valid for 5 minutes by default. The values can be changed in the server's configuration file.
Response
Terminology components (and in fact any content) can be read from any point in time by using the special path expression: {branch}@{timestamp}
. To get the state of a SNOMED CT Concept from the previous comparison on the compareBranch
at the returned compareHeadTimestamp
, you can use the following request:
Request
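A sketch of such a request, using the placeholders returned by the compare resource (path prefix assumed as above):

```
GET /snowowl/snomedct/v3/{compareBranch}@{compareHeadTimestamp}/concepts/{conceptId}
```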
Response
To get the state of the same SNOMED CT Concept but on the base branch, you can use the following request:
Request
Response
Additionally, if required to compute what's changed on the component since the creation of the task, it is possible to get back the base version of the changed component by using another special path expression: {branch}^
.
Request
Response
The @ and ^ characters used in these path expressions are not URL-safe, thus they must be encoded before sending the HTTP request.
This describes the resources that make up the official Snow Owl® CIS API.
Swagger documentation available on your Snow Owl instance at /snowowl/cis.
The endpoints /ValueSet
and /ValueSet/{valueSetId}
and corresponding operations expose all Value Set resources stored in the server (or implicit Value Sets, if the corresponding Value Set plug-in supports them). CUD operations are not supported.
All value sets accessible via the /ValueSet
endpoints can be expanded.
For SNOMED CT URIs, implicit value sets are supported:
?fhir_vs - all Concept IDs in the edition/version. If the base URI is http://snomed.info/sct, this means all possible SNOMED CT concepts
?fhir_vs=isa/[sctid] - all concept IDs that are subsumed by the specified Concept.
?fhir_vs=refset - all concept IDs that correspond to real reference sets defined in the specified SNOMED CT edition
?fhir_vs=refset/[sctid] - all concept IDs in the specified reference set
The following in-parameters are supported:
activeOnly - to return only active codes in the response
filter - to filter the results lexically
displayLanguage - to select the language for the returned display values
includeDesignations - whether to include all designations or not in the returned response
count - to select the number of codes to be returned in the expansion
after - to select codes to be returned after this last page value (cursor)
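A hypothetical expansion of an implicit value set combining some of these parameters (the /snowowl/fhir prefix is an assumption; the url query value must be URL-encoded in practice):

```
GET /snowowl/fhir/ValueSet/$expand?url=http://snomed.info/sct?fhir_vs=isa/404684003&count=10&activeOnly=true
```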
Codes can be validated against a given Value Set specified by the value set's logical id or canonical URL. In terms of Snow Owl terminology components, codes are validated against:
SNOMED CT Simple Type Reference Sets with Concepts as referenced components.
SNOMED CT Query Type Reference Sets with ECL expressions (each member is a Value Set)
Snow Owl's generic Value Sets
Validation performs the following checks:
The existence of the given Value Set (error if not found)
The existence of the reference in the existing Value Set to the given code (error if not found)
The existence of the given code in the system (error if not found)
Potential version mismatch (error if the reference points to a version that is different from the code's version)
The status of the given code and reference (warning if code is inactive while reference is active)
The endpoints /ConceptMap
and /ConceptMap/{conceptMapId}
and corresponding operations expose the following types of terminology resources:
SNOMED CT Simple Map Reference Sets with Concepts as referenced components
SNOMED CT Complex Map Reference Sets
SNOMED CT Extended Map Reference Sets
Snow Owl's generic Concept Maps
All concept maps accessible via the /ConceptMap endpoints are considered when retrieving mappings (translations). The translate request's source parameter, which designates the source value set, cannot be interpreted and is therefore not used. With the exception of SNOMED CT, where the standard URI is expected, our proprietary short names or component ids are used to designate the source/target code system.
SNOMED CT:
Simple Map Type Reference Set mappings are considered equivalent in terms of their correlation
The availability and format of target code systems are not guaranteed; there is an ongoing conversation at SNOMED International to rectify this.
SNOMED CT concepts represent ideas that are relevant in a clinical setting and have a unique concept identifier (a SNOMED CT identifier or SCTID for short) assigned to them. The terminology covers a wide set of domains and includes concepts that represent parts of the human body, clinical findings, medicinal products and devices, among many others. SCTIDs make it easy to refer unambiguously to the described ideas in eg. an Electronic Health Record or prescription, while SNOMED CT's highly connected nature allows complex analytics to be performed on aggregated data.
Each concept is associated with human-readable descriptions that help users select the SCTID appropriate for their use case, as well as relationships that form links between other concepts in the terminology, further clarifying their intended meaning. The API for manipulating the latter two types of components are covered in sections Descriptions and Relationships, respectively.
The three component types mentioned above (also called core components) have a distinct set of attributes which together form the concept's definition. As an example, each concept includes an attribute (the definition status) which states whether the definition is sufficiently defined (and so can be computationally processed), or relies on a (human) reader to come up with the correct meaning based on the associated descriptions.
Terminology services exposed by Snow Owl allows clients to create, retrieve, modify or remove concepts from a SNOMED CT code system (concepts that are considered to be already published to consumers can only be removed with an administrative operation). Concepts can be retrieved by SCTID or description search terms; results can be further constrained via Expression Constraint Language (ECL for short) expressions.
A concept resource without any expanded properties looks like the following:
The resource includes all RF2 properties that are defined in SNOMED International's Release File Specification🌎:
id
effectiveTime
active
moduleId
definitionStatusId
It also contains the following supplementary information:
parentIds
, ancestorIds
These arrays hold a set of SCTIDs representing the concept's direct and indirect ancestors in the inferred taxonomy. The (direct) parents array contains all destinationId
s from active and inferred IS A relationships where the sourceId
matches this concept's SCTID, while the ancestor array contains all SCTIDs taken from the parent and ancestor array of direct parents. The arrays are sorted by SCTID. A value of -1
means that the concept is a root concept that does not have any concepts defined as its parent. Typically, this only applies to 138875005|SNOMED CT Concept|
in SNOMED CT content.
See the following example response for a concept placed deeper in the tree:
Compare the output with a rendering from a user interface, where the concept appears in two different places after exploring alternative routes in the hierarchy. Parents are marked with blue, while ancestors are highlighted with orange:
statedParentIds
, statedAncestorIds
Same as the above, but for the stated taxonomy view.
released
A boolean value indicating whether this concept was part of at least one SNOMED CT release. New concepts start with a value of false
, which is set to true
as part of the code system versioning process. Released concepts can only be deleted by an administrator.
iconId
A descriptive key for the concept's icon. The icon identifier typically corresponds to the lowercase, underscore-separated form of the hierarchy tag🌎 contained in each concept's Fully Specified Name (or FSN for short). The following keys are currently expected to appear in responses (subject to change):
administration_method
, assessment_scale
, attribute
, basic_dose_form
, body_structure
, cell
, cell_structure
, clinical_drug
, disorder
, disposition
, dose_form
, environment
, environment_location
, ethnic_group
, event
, finding
, geographic_location
, inactive_concept
, intended_site
, life_style
, link_assertion
, linkage_concept
, medicinal_product
, medicinal_product_form
, metadata
, morphologic_abnormality
, namespace_concept
, navigational_concept
, observable_entity
, occupation
, organism
, owl_metadata_concept
, person
, physical_force
, physical_object
, procedure
, product
, product_name
, qualifier_value
, racial_group
, record_artifact
, regime_therapy
, release_characteristic
, religion_philosophy
, role
, situation
, snomed_rt_ctv3
, social_concept
, special_concept
, specimen
, staging_scale
, state_of_matter
, substance
, supplier
, transformation
, tumor_staging
, unit_of_presentation
In the metadata hierarchy, the use of a hierarchy tag alone would not distinguish concepts finely enough, as lots of them will have eg. "foundation metadata concept" set as their tag. In these cases, concept identifiers may be used as the icon identifier.
subclassDefinitionStatus
Currently unsupported. Indicates whether a parent concept's direct descendants form a disjoint union🌎 in OWL 2 terms; when set to DISJOINT_SUBCLASSES
, child concepts are assumed to be pairwise disjoint and together cover all possible cases of the parent concept.
The default value is NON_DISJOINT_SUBCLASSES
where no such assumption is made.
Core component information related to the current concept can be attached to the response by using the expand
query parameter, allowing clients to retrieve more data in a single roundtrip. Property expansion runs the necessary requests internally, and attaches results to the original response object.
Expand options are expected to appear in the form of propertyName1(option1: value1, option2: value2, expand(...)), propertyName2()
where:
propertyNameN
stands for the property to expand;
optionN: valueN
are key-value pairs providing additional filtering for the expanded property;
optionally, expand
s can be nested, and the options will apply to the components returned under the parent property;
when no expand options are given, an empty set of ()
parentheses need to be added after the property name.
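For example, a hypothetical expand expression following these rules could look like this (property names are described below):

```
GET /snowowl/snomedct/v3/MAIN/concepts/138875005?expand=pt(),descriptions(active:true),referenceSet()
```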
Supported expandable property names are:
referenceSet()
Expands reference set metadata and content, available on identifier concepts🌎.
If a corresponding reference set was already created for an identifier concept (a subtype of 900000000000455006|Reference set
), information about the reference set will appear in the response:
Note that the response object for property referenceSet
can also be retrieved directly using the Reference Sets API.
To retrieve reference set members along with the reference set in a single request, use a nested expand
property named members
:
Reference set members can also be fetched via the SNOMED CT Reference Set Member API.
preferredDescriptions()
Expands descriptions with preferred acceptability.
Returns all active descriptions that have at least one active language reference set member with an acceptabilityId of 900000000000548007|Preferred|
, in compact form, along with the concept. Preferred descriptions are frequently used on UIs when a display label is required for a concept.
This information is also returned when expand options pt()
or fsn()
(described later) are present.
semanticTags()
Returns hierarchy tags extracted from FSNs.
An array containing the hierarchy tags from all Fully Specified Name-typed descriptions of the concept is added as an expanded property if this option is present:
inactivationProperties()
Collects information from concept inactivation indicator and historical association reference set members referencing this concept.
Members of 900000000000489007|Concept inactivation indicator attribute value reference set|
and subtypes of 900000000000522004 |Historical association reference set|
hold information about a reason a concept is being retired in a release, as well as suggest potential replacement(s) for future use.
The concept stating the reason for inactivation is placed under inactivationProperties.inactivationIndicator.id
(a short-hand property exists without an extra nesting, named inactivationProperties.inactivationIndicatorId
). It is expected that only a single active inactivation indicator exists for an inactive concept.
Historical associations are returned under the property inactivationProperties.associationTargets
as an array of objects. Each object includes the identifier of the historical association reference set and the target component identifier, in the same manner as described above: as an object with a single id
property and as a string value.
While most object values where a single id
key is present indicate that the property can be expanded to a full resource representation, this is currently not supported for inactivation properties; an expand option of inactivationProperties(expand(inactivationIndicator()))
will not retrieve additional data for the indicator concept.
members()
Expands reference set members referencing this concept.
Note that this is different from reference set member expansion on a reference set, ie. referenceSet(expand(members()))
, as this option will return reference set members where the referencedComponentId
property matches the concept SCTID, from multiple reference sets (if permitted by other expand options). Inactivation and historical association members can also be returned here, in their entirety (as opposed to the summarized form described in inactivationProperties()
above).
Reference set members can also be fetched in a "standalone" fashion via the SNOMED CT Reference Set Member API.
Compare the output with the one returned when inactivation indicators were expanded. The last two reference set members correspond to the historical association and the inactivation reason, respectively:
The following expand options are supported within members(...)
:
active: true | false
Controls whether only active or inactive reference set members should be returned.
refSetType: "{type}" | [ "{type}"(,"{type}")* ]
The reference set type(s) as a string, to be included in the expanded output; when multiple types are accepted, values must be enclosed in square brackets and separated by a comma.
expand(...)
Allows nested expansion of reference set member properties.
Allowed reference set type constants are (these are described in the Reference Set Types🌎 section of SNOMED International's "Reference Sets Practical Guide" and the Reference Set Types🌎 section of "Release File Specification" in more detail):
SIMPLE
- simple type
SIMPLE_MAP
- simple map type
LANGUAGE
- language type
ATTRIBUTE_VALUE
- attribute-value type
QUERY
- query specification type
COMPLEX_MAP
- complex map type
DESCRIPTION_TYPE
- description type
CONCRETE_DATA_TYPE
- concrete data type (vendor extension for representing concrete values in Snow Owl)
ASSOCIATION
- association type
MODULE_DEPENDENCY
- module dependency type
EXTENDED_MAP
- extended map type
SIMPLE_MAP_WITH_DESCRIPTION
- simple map type with map target description (vendor extension for storing a descriptive label with map targets, suitable for display)
OWL_AXIOM
- OWL axiom type
OWL_ONTOLOGY
- OWL ontology declaration type
MRCM_DOMAIN
- MRCM domain type
MRCM_ATTRIBUTE_DOMAIN
- MRCM attribute domain type
MRCM_ATTRIBUTE_RANGE
- MRCM attribute range type
MRCM_MODULE_SCOPE
- MRCM module scope type
ANNOTATION
- annotation type
COMPLEX_BLOCK_MAP
- complex map with map block type (added for national extension support)
See the following example for combining reference set member status filtering and reference set type restriction:
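A sketch of such a request ({conceptId} and the path prefix are placeholders; the type constants come from the list above):

```
GET /snowowl/snomedct/v3/MAIN/concepts/{conceptId}?expand=members(active:true, refSetType:["ASSOCIATION","ATTRIBUTE_VALUE"])
```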
module()
Expands the concept's module identified by property moduleId
, and places it under the property module
. As the returned resource is a concept itself, property expansion can apply to modules as well by using a nested expand()
option.
Property module
does not appear in compact form (with a single id
key) in the standard representation.
definitionStatus()
Expands the definition status concept identified by the property definitionStatusId
, and places it under the property definitionStatus
. When this property is not expanded, a smaller placeholder object with a single id
property is returned in the response. Nested expand()
options work the same way as in the case of module()
.
pt()
and fsn()
Expands the Preferred Term🌎 (PT for short) and the Fully Specified Name🌎 (FSN for short) of the concept, respectively.
These descriptions are language context-dependent; the use of certain descriptions can be preferred in one dialect and acceptable or discouraged in others. The final output is controlled by the Accept-Language🌎 request header, which clients can use to supply a list of locales in order of preference.
In addition to the standard locales like en-US
, Snow Owl uses an extension to allow referring to language reference sets by identifier, in the form of {language code}-x-{language reference set ID}
. "Traditional" language tags are resolved to language reference set IDs as part of executing the request by consulting the code system settings:
An example response pair demonstrating cases where the PT is different in certain dialects:
descriptions()
Expands all descriptions associated with the concept, and adds them to a collection resource (that includes an element limit and a total hit count) under the property descriptions
. These can also be retrieved separately by the use of the SNOMED CT Description API.
The collection resource's limit
and total
values are set to the same value (the number of descriptions returned for the concept) because a description fetch limit can not be set via a property expand option.
The following expand options are supported within descriptions(...)
:
active: true | false
Controls whether only active or inactive descriptions should be included in the response. (If both are required, do not set any value for this expand property.)
typeId: "{expression}"
An ECL expression that restricts the typeId
property of each returned description. The simplest expression is a single SCTID, eg. when this option has a value of "900000000000013009"
, only Synonyms🌎 will be expanded.
sort: "{field}(:{asc | desc})?"(, "{field}(:{asc | desc})")*
Items in the collection resource are sorted based on the sort configuration given in this option. A single, comma-separated string value is expected; field names and sort order must be separated by a colon (:
) character. When no sort order is given, ascending order (asc
) is assumed.
expand(...)
Allows nested expansion of description properties.
relationships()
Retrieves all "outbound" relationships, where the sourceId
property matches the SCTID of the concept(s), adding them to a property named relationships
as a collection resource object. The same set of relationships can also be retrieved in standalone form via Snow Owl's SNOMED CT Relationship API.
limit
and total
values on relationships
are set to the same value (the number of relationships returned for the concept) because a relationship fetch limit can not be set via an expand option.
The following expand options are supported within relationships(...)
:
active: true | false
Controls whether only active or inactive relationships should be included in the response. (If both are required, do not set any value for this expand property.)
characteristicTypeId: "{expression}"
An ECL expression that restricts the characteristicTypeId property of each returned relationship. As an example, when this value is set to "<<900000000000006009", both stated and inferred relationships will be returned, as their characteristic type concepts are descendants of 900000000000006009|Defining relationship|.
typeId: "{expression}"
An ECL expression that restricts the typeId property of each returned relationship.
destinationId: "{expression}"
An ECL expression that restricts the destinationId property of each returned relationship.
sort: "{field}(:{asc | desc})?"(, "{field}(:{asc | desc})")*
Items in the collection resource are sorted based on the sort configuration given in this option. A single, comma-separated string value is expected; field names and sort order must be separated by a colon (:) character. When no sort order is given, ascending order (asc) is assumed.
expand(...)
Allows nested expansion of relationship properties.
inboundRelationships()
Retrieves all "inbound" relationships, where the destinationId property matches the SCTID of the concept(s), adding them to property inboundRelationships.
limit and total values on inboundRelationships are set to the same value (the number of inbound relationships returned for the concept), but unlike the options above, a fetch limit is applied when it is specified.
The same set of options are supported within inboundRelationships as in relationships (see above), with three important differences:
destinationId: "{expression}"
This option is not supported on inboundRelationships; all destination IDs match the concept's SCTID.
sourceId: "{expression}"
An ECL expression that restricts the sourceId property of each returned relationship.
limit: {limit}
Limits the maximum number of inbound relationships returned. Not recommended when the expand option applies to a collection of concepts rather than a single one, as the limit is not applied individually to each concept.
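A sketch requesting at most ten active inbound relationships for a single concept (host and identifiers are illustrative):

  curl 'http://localhost:8080/snowowl/snomedct/SNOMEDCT/concepts/138875005?expand=inboundRelationships(active:true,limit:10)'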
descendants() / statedDescendants()
Depending on which direct setting is used, retrieves all concepts whose [stated]parentIds and/or [stated]ancestorIds array contains this concept's SCTID. Results are added to property descendants or statedDescendants, based on the option name used.
Only active concepts are returned, as these are expected to have active "IS A" relationships or OWL axioms that describe the relative position of the concept within the terminology graph.
The following options are available:
direct: true | false (required)
Controls whether only direct descendants should be collected or a transitive closure of concept subtypes.
When set to true, only property [stated]parentIds will be searched, otherwise both [stated]parentIds and [stated]ancestorIds are used. The presence or absence of the "stated" prefix in the search field depends on the option name.
limit: 0
Applicable only when a single concept's properties are expanded. Collects the number of descendants in an efficient manner, and sets the total property of the returned collection resource without including any concepts in it. Not used when a collection of concepts is expanded in a single request, or when any other value is given.
expand(...)
Allows nested expansion of concept properties on each collected descendant.
ancestors() / statedAncestors()
Depending on which direct setting is used, retrieves all concepts that appear in this concept's [stated]parentIds and/or [stated]ancestorIds array. Results are added to property ancestors or statedAncestors, based on the option name used.
The following options are available:
direct: true | false (required)
Controls whether only direct ancestors should be collected or a transitive closure of concept supertypes.
When set to true, only property [stated]parentIds will be used for concept retrieval, otherwise the union of [stated]parentIds and [stated]ancestorIds is collected (the special placeholder value "-1" is ignored). The presence or absence of the "stated" prefix in the search field depends on the option name.
limit: 0
Collects the number of ancestors in an efficient manner, and sets the total property of the returned collection resource without including any concepts in it. Not used when any other value is given (this property expansion does, however, support cases where multiple concepts' ancestors need to be returned).
expand(...)
Allows nested expansion of concept properties on each collected ancestor.
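As a sketch, the following could count all inferred descendants of a concept without returning them, while also listing its direct ancestors (host and identifiers are illustrative):

  curl 'http://localhost:8080/snowowl/snomedct/SNOMEDCT/concepts/138875005?expand=descendants(direct:false,limit:0),ancestors(direct:true)'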
A GET request that includes a concept identifier as its last path parameter will return information about the concept in question:
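A minimal sketch (the host, code system and concept ID are illustrative):

  curl 'http://localhost:8080/snowowl/snomedct/SNOMEDCT/concepts/138875005'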
expand={options}
Concept properties that should be returned along with the original request, as part of the concept resource. See available options in section Property expansion above.
field={field1}[,{fieldN}]*
Restricts the set of fields returned from the index. Results in a smaller response object when only specific information is needed.
Supported names for field selection are the following:
active
activeMemberOf
ancestors - controls the appearance of ancestorIds as well
definitionStatusId
doi
effectiveTime
exhaustive
iconId
id - always included in the response, even when not present as a field parameter
mapTargetComponentType
memberOf
moduleId
namespace
parents - controls the appearance of parentIds as well
preferredDescriptions
refSetType
referencedComponentType
released
score
semanticTags
statedAncestors - controls the appearance of statedAncestorIds as well
statedParents - controls the appearance of statedParentIds as well
created and revised - these fields are associated with revision control; even though they are listed as supported fields, they do not appear in the response even when explicitly requested.
Specifying any other field name results in a 400 Bad Request response:
Fields with a value of null do not appear in the response, even if they are selected for inclusion.
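For example, a sketch that returns only a handful of concept fields (host and identifiers are illustrative):

  curl 'http://localhost:8080/snowowl/snomedct/SNOMEDCT/concepts/138875005?field=id,active,effectiveTime,moduleId'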
Accept-Language: {language-range}[;q={weight}](, {language-range}[;q={weight}])*
Controls the logic behind Preferred Term and Fully Specified Name selection for the concept. See the documentation for expand options pt() and fsn() for details.
Specifying an unknown language or dialect results in a 400 Bad Request response:
A GET request that ends with concepts as its last path parameter will search for concepts matching all of the constraints supplied as query parameters. By default (when no query parameter is added) it returns all concepts.
The response consists of a collection of concept resources, a searchAfter key (described in section "Query parameters" below), the limit used when computing response items, and the total hit count:
definitionStatus={eclExpression} | {id1}[,{idN}]*
An ECL expression or enumerated list that describes the allowed set of SCTIDs that must appear in matching concepts' definitionStatusId property. Since only two values are in use, 900000000000074008|Primitive| and 900000000000073002|Defined| for primitive and fully defined concepts respectively, a single SCTID is usually entered here.
ecl={eclExpression}
Restricts the returned set of concepts to those that match the specified ECL expression. The query parameter can be used on its own for evaluation of expressions, or in combination with other query parameters. Expressions conforming to the short form of ECL 1.5 syntax are accepted. The expression is evaluated over the inferred view, based on the currently persisted inferred relationships.
As ECL syntax uses special symbols, query parameters should be encoded to URL-safe characters. The examples in this section use the cleartext form for better readability.
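As a sketch, the expression <<404684003 percent-encodes to %3C%3C404684003 (host and path are illustrative):

  curl 'http://localhost:8080/snowowl/snomedct/SNOMEDCT/concepts?ecl=%3C%3C404684003&limit=10'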
statedEcl={eclExpression}
Same as ecl, but the input expression is evaluated over the stated view by using stated relationships (if present) and OWL axioms for evaluation.
semanticTag={tag1}[,{tagN}]*
Filters concepts by a comma-separated list of allowed hierarchy tags. Matching concepts can have any of the supplied tags present (at least one) on their Fully Specified Names.
term={searchTerm}
Matching concepts must have an active description whose term matches the string specified here. The search is executed in "smart match" mode; the following examples show which search expressions match which description terms:
descriptionType={eclExpression} | {id1}[,{idN}]*
Restricts the result set by description type; matches must have at least one active description whose typeId property is included in the evaluated ECL result set or SCTID list. It is typically used in combination with term (see above) to control which type of descriptions should be matched by term.
parent={id1}[,{idN}]*
statedParent={id1}[,{idN}]*
ancestor={id1}[,{idN}]*
statedAncestor={id1}[,{idN}]*
Filters concepts by hierarchy. All four query parameters accept a comma-separated list of SCTIDs; the result set will contain direct descendants of the specified values in the case of parent and statedParent, and a transitive closure of descendants for ancestor and statedAncestor (including direct children). Parameters starting with stated... will use the stated IS A hierarchy for computations.
doi=true | false
Controls whether relevance-based sorting should take Degree of Interest (DOI for short) into account. When enabled, concepts that are used frequently in a clinical environment are favored over concepts with a lower likelihood of use.
namespace={namespaceIdentifier}
namespaceConceptId={id1}[,{idN}]*
The SCTID of matching concepts must have the specified 7-digit namespace identifier, eg. 1000154. When matching by namespace concept ID, a comma-separated list of SCTIDs is expected, and the associated 7-digit identifier will be extracted from the active FSNs of each concept entered here.
isActiveMemberOf={eclExpression} | {id1}[,{idN}]*
This filter accepts either a single ECL expression, or a comma-separated list of reference set SCTIDs. For each matching concept, at least one active reference set member must exist where the referencedComponentId points to the concept and the referenceSetId property is listed in the filter, or is a member of the evaluated ECL expression's result set.
effectiveTime={yyyyMMdd} | Unpublished
Filters concepts by effective time. The query parameter accepts a single effective time in yyyyMMdd (short) format, or the literal Unpublished when searching for concepts that have been modified since they were last published as part of a code system version.
Note that only the concept's effective time is taken into account, not that of any of its related core components (descriptions, relationships) or reference set members. If the concept's status, definition status or module did not change since the last release, its effective time will not change either.
When searching for Unpublished concepts, the effectiveTime property will not appear on returned concept resources, as the value is null for all unpublished components.
active=true | false
Filters concepts by status. When set to true, only active concepts are added to the resulting collection, while a value of false collects inactive concepts only. (If both active and inactive concepts should be returned, do not add this parameter to the query.)
module={eclExpression} | {id1}[,{idN}]*
Filters concepts by moduleId. The query parameter accepts either a single ECL expression, or a comma-separated list of module SCTIDs; concepts must have a moduleId property that is included in the ID list or the evaluated ECL result.
id={id1}[,{idN}]*
Filters concepts by SCTID. The parameter accepts a comma-separated list of IDs; matching concepts must have an id property that matches any of the specified identifiers.
sort: "{field}(:{asc | desc})?"(, "{field}(:{asc | desc})")*
Sorts returned concept resources based on the sort configuration given in this parameter. Field names and sort order must be separated by a colon (:) character. When no sort order is given, ascending order (asc) is assumed.
Field names supported for sorting are the same as the ones used for field selection; please see above for the complete list.
The default behavior is to sort results by id, in ascending order. SCTIDs are sorted lexicographically, not as numbers; this means that eg. 10683591000119104 will appear before 10724008, as their first two digits are the same, but the third digit is smaller in the former identifier.
limit={limit}
Controls the maximum number of items that should be returned in the collection. When not specified, the default limit is 50 items.
searchAfter={searchAfter}
Supports keyset pagination, ie. retrieving the next page of items based on the response for the current page. To use it, set limit to the number of items expected on a single page, then run the first search request without setting a searchAfter key. The returned response will include the value to be inserted into the next request:
The process can be repeated until the items array turns up empty, indicating that there are no more pages to return.
searchAfter keys should be considered opaque; they cannot be constructed to jump to an arbitrary point in the enumeration. Keyset pagination also does not gracefully handle cases where eg. concepts with "smaller" SCTIDs are inserted while pages are being retrieved from the server. If a consistent result set is expected, a point-in-time path parameter should be used in consecutive search requests.
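A minimal shell sketch of the pagination loop (assumes the jq tool is available; host and path are illustrative):

  # Fetch concepts 50 at a time, following searchAfter keys until an empty page is returned
  URL='http://localhost:8080/snowowl/snomedct/SNOMEDCT/concepts'
  AFTER=''
  while true; do
    PAGE=$(curl -s "$URL?limit=50${AFTER:+&searchAfter=$AFTER}")
    COUNT=$(echo "$PAGE" | jq '.items | length')
    [ "$COUNT" -eq 0 ] && break     # empty page: no more results
    echo "$PAGE" | jq -r '.items[].id'
    AFTER=$(echo "$PAGE" | jq -r '.searchAfter')
  done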
expand={options}
Concept properties that should be returned along with the original request, as part of the concept resource. See available options in section Property expansion above.
field={field1}[,{fieldN}]*
Restricts the set of fields returned from the index. Results in a smaller response object when only specific information is needed. See above for the list of supported field names.
Accept-Language: {language-range}[;q={weight}](, {language-range}[;q={weight}])*
Controls the logic behind Preferred Term and Fully Specified Name selection for returned concepts. See the documentation for expand options pt() and fsn() for details.
POST requests submitted to concepts/search perform the same search operation as described for the GET request above, but each query parameter is replaced by a property in the JSON request body:
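A sketch of such a request (property values are illustrative):

  curl -X POST 'http://localhost:8080/snowowl/snomedct/SNOMEDCT/concepts/search' \
    -H 'Content-Type: application/json' \
    -d '{ "active": true, "term": "heart", "limit": 10 }'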
Accept-Language: {language-range}[;q={weight}](, {language-range}[;q={weight}])*
Controls the logic behind Preferred Term and Fully Specified Name selection for returned concepts. See the documentation for expand options pt() and fsn() for details.
POST requests submitted to concepts create a new concept with the specified parameters, then commit the result to the terminology repository.
The resource path typically consists of a single code system identifier for these requests, indicating that changes should go directly to the working branch of the code system, or a direct child of the working branch for isolating a set of changes that can be reviewed and merged in a single request.
The request body needs to conform to the following requirements:
include at least one Fully Specified Name (FSN)
include at least one preferred synonym (Preferred Term, PT)
The SCTID of created components can be specified in two ways:
Explicitly, by setting the id property on the component object; the request fails when an existing component in the repository already has the same SCTID assigned to it;
Allowing the server to generate an identifier, by leaving id unset and populating namespaceId with the expected namespace identifier, eg. "1000154". Requests using namespaceId should not fail due to an SCTID collision, as generated identifiers are checked for uniqueness.
When a namespaceId is set on the concept level, descriptions and relationships will use this value by default, so in this case neither id nor namespaceId needs to be set on them. The same holds true for moduleId – the concept's module identifier is applied to all related descriptions, relationships and reference set members in the request, unless it is set to a different value on the component object.
Please see the example below for required properties. (Note that it is non-executable in its current form, as the OWL axiom reference set member cannot be created without knowing the concept's SCTID in advance.)
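A sketch of the request body shape (all identifiers, terms and acceptability values are illustrative; the first description is the FSN, the second the preferred synonym that becomes the PT):

  {
    "namespaceId": "1000154",
    "moduleId": "900000000000207008",
    "descriptions": [
      {
        "typeId": "900000000000003001",
        "term": "Example disorder (disorder)",
        "languageCode": "en",
        "acceptability": { "900000000000509007": "PREFERRED" }
      },
      {
        "typeId": "900000000000013009",
        "term": "Example disorder",
        "languageCode": "en",
        "acceptability": { "900000000000509007": "PREFERRED" }
      }
    ],
    "commitComment": "Creating a new example concept"
  }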
A successful commit will result in a 201 Created response; the response header Location can be used to extract the generated concept identifier. Validation errors in the request body cause a 400 Bad Request response.
X-Author: {author_id}
Changes the author recorded in the commit message from the authenticated user (default) to the specified user.
PUT requests to locations that identify a concept resource (same as when retrieving concept content) will update the concept. Following a successful commit, the state of the concept on the branch should match the state received in the request body.
The following properties can be updated on any component. If they are not included in the request, the corresponding component property remains unchanged.
moduleId: string
active: boolean
effectiveTime: string (in yyyyMMdd, "short" format)
When inactivating a concept, an object named inactivationProperties can be added that can point to possible replacement concepts and/or specify the reason for inactivation:
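A sketch of such a payload (the exact property names inside inactivationProperties are assumptions; substitute real inactivation indicator and historical association reference set IDs):

  {
    "active": false,
    "inactivationProperties": {
      "inactivationIndicatorId": "{indicatorConceptId}",
      "associationTargets": [
        {
          "referenceSetId": "{historicalAssociationRefsetId}",
          "targetComponentId": "{replacementConceptId}"
        }
      ]
    }
  }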
Specifying an empty string for inactivationIndicatorId will remove an existing indicator, while an empty array will delete historical association reference set members for the concept. This is handled automatically when the concept is re-activated, so inactivationProperties can be omitted from such requests entirely:
Properties that can be updated on the concept itself are:
definitionStatusId: string
subclassDefinitionStatus: "DISJOINT_SUBCLASSES" | "NON_DISJOINT_SUBCLASSES"
In addition to the above, core components and reference set members related to the concept in question can be updated in a single request by including any of the following properties:
descriptions
relationships
members
Each of the above can hold a collection resource of the respective component resource type. These resources are described in detail in sections Descriptions, Relationships and Reference set members, respectively.
If a collection resource property is not included in the update request, the corresponding component type is unchanged. An empty array attempts to delete all existing related components. Otherwise, the components included in the collection are compared by SCTID/UUID to existing components, and it is decided whether:
a new component should be created (if the identifier did not appear previously in the terminology store)
an existing component should be updated (if the identifier existed previously in the terminology store)
an existing component should be deleted (if the identifier does not exist in the request, but existed previously in the terminology store)
Successful updates return 204 No Content from the server. Updates that attempt to modify the state of a missing or deleted concept result in a 404 Not Found response.
force=true | false
Specifies whether updating the effective time of the concept should be allowed. The default value is false; in such cases, supplying an effective time property for the update is disallowed. The component's effective time after an update is computed automatically at all times; when the force property is set to true, this can be overridden externally.
X-Author: {author_id}
Changes the author recorded in the commit message from the authenticated user (default) to the specified user.
DELETE requests sent to a URI where the last path parameter is an existing concept ID will remove the concept and all of its associated components (descriptions, relationships, reference set members referring to the concept) from the terminology repository.
Deletes are acknowledged with a 204 No Content response on success. Deletion can be verified by trying to retrieve concept information from the same resource path – a 404 Not Found response should be returned in this case.
Note that resource branches maintain content in isolation, and so deleting a concept on eg. a task branch will not remove the concept from the code system's working branch, until work on the task branch is approved and merged into mainline.
force=true | false
Specifies whether deletion of the concept should be allowed if it has components that were already part of an RF2 release (or code system version). This is indicated by the released property on each component.
The default value is false; with the option disabled, attempting to delete a released component results in a 409 Conflict response:
Only administrators should set this parameter to true. It is advised to delete redundant or erroneous components before they are put in circulation as part of a SNOMED CT RF2 release. In other cases, inactivation should be preferred over removal.
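As a sketch (administrators only; host and path are illustrative):

  curl -X DELETE 'http://localhost:8080/snowowl/snomedct/SNOMEDCT/concepts/{conceptId}?force=true'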
X-Author: {author_id}
Changes the author recorded in the commit message from the authenticated user (default) to the specified user.
Code Systems maintained within Snow Owl are exposed (read-only) via the endpoints /CodeSystem and /CodeSystem/{codeSystemId}. Supported concept properties are handled and returned if requested. The currently exposed code systems are:
Snow Owl OSS:
SNOMED CT
Snow Owl:
ATC
ICD-10 (and extensions)
LOINC
OPCS
Local Code Systems
Any other terminology
All standard and default SNOMED CT properties are supported, including the relationship type and concrete value properties. In addition to the FHIR SNOMED CT properties, Snow Owl can return the effective time property, with the URI http://snomed.info/field/Concept.effectiveTime.
Both GET and POST HTTP methods are supported. Concepts are queried based on code, version, system or Coding. Designations are included as part of the response, as well as supported concept properties when requested. No date parameter is supported.
Example for looking up properties (inactive and method) of the latest version of a SNOMED CT procedure by method code:
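A sketch of such a lookup, assuming the FHIR API is served under /snowowl/fhir (80146002 |Appendectomy| is an illustrative procedure code):

  curl 'http://localhost:8080/snowowl/fhir/CodeSystem/$lookup?system=http://snomed.info/sct&code=80146002&property=inactive&property=method'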
For SNOMED CT, all common and SNOMED CT properties are supported, including all active relationship types.
Both GET and POST HTTP methods are supported for all exposed terminologies. Example for validating a SNOMED CT code:
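A sketch (host and path are illustrative):

  curl 'http://localhost:8080/snowowl/fhir/CodeSystem/$validate-code?url=http://snomed.info/sct&code=80146002&display=Appendectomy'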
Both GET and POST HTTP methods are supported. Subsumption testing is available for all terminologies, including SNOMED CT.
Example for SNOMED CT (version 2021-07-31):
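A sketch that tests whether 404684003 |Clinical finding| subsumes 73211009 |Diabetes mellitus| against the 2021-07-31 International Edition (host and path are illustrative):

  curl 'http://localhost:8080/snowowl/fhir/CodeSystem/$subsumes?system=http://snomed.info/sct&version=http://snomed.info/sct/900000000000207008/version/20210731&codeA=404684003&codeB=73211009'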
Snow Owl uses a single data source: an Elasticsearch cluster (either embedded or external). To back up and restore the data, we highly recommend the official Snapshot and Restore feature of Elasticsearch. On top of that API, we also recommend using tools like Curator to ease the lifecycle management of your Elasticsearch cluster and your indices.
Reminder: for production environments we highly recommend using an external Elasticsearch cluster as opposed to the embedded one. External Elasticsearch clusters are more customizable and can be configured to use other snapshot repository types, like Amazon S3, HDFS, etc.
Below you can find a very simple guide on how to configure the backup and restore process for your Snow Owl Terminology Server using Curator.
Fast Healthcare Interoperability Resources (FHIR) specifies resources, operations, coded data types and terminologies that are used for representing and communicating coded, structured data in the FHIR core specification within its Terminology Module.
Snow Owl's pluggable and extensible architecture allows modular development of the FHIR API both in terms of the supported functionality as well as the exposed terminologies. Additionally, Snow Owl's revision-based model allows the concurrent management of multiple versions.
The Snow Owl terminology server's FHIR API release includes support for the following resources:
Versions in Snow Owl are represented as individual FHIR resources when accessed via the FHIR API endpoints. If there are no versions present for a given resource, only the latest development version is returned as an available FHIR resource. When accessing a terminology resource via the FHIR API without specifying an exact version tag, the system will always assume and return the latest development version, including not yet published changes. It is recommended to always query a specific version of any terminology content to get consistent results, especially when the same terminology server instance is used for both authoring and distribution.
Resource representations can be filtered by the following supported official FHIR payload filters:
_summary - to return a predefined set of properties and their values
_elements - to return only the mandatory and the specified list of properties and nothing else
The supported search parameters:
_id - to filter FHIR resources by their logical identifier
name - to filter FHIR resources by their name (which in Snow Owl equals the logical identifier)
title - to filter FHIR resources by their title property lexically (Snow Owl by default uses exact, phrase and prefix matching during its lexical search activities)
url - to filter FHIR resources by their assigned url value
system - to filter FHIR resources by their assigned system value (which in Snow Owl always matches the url value)
version - to filter FHIR resources by their version property value
_lastUpdated - exposed but not supported yet
Sorting is supported via the standard FHIR sort parameters, while paging is supported with a new after parameter (using count as page size). Traditional offset + count based paging is not supported.
Globally unique logical URIs represent each terminology resource. For code systems these are:
Snow Owl's Local Code Systems (LCS) are identified by a URI that is based on the Organization Link property stored within Snow Owl's Terminology Registry and the Short Name of the LCS, e.g.: https://b2i.sg/MyLocalCodeSystem.
The logical id field of each resource is assigned by Snow Owl and is unique within it. Once it has been assigned, the id never changes. For this logical identifier, Snow Owl follows the pattern:
For example, to identify a particular SNOMED CT Edition with its version 2021-03-01:
SNOMEDCT-US/2021-03-01
For example, to identify a particular LOINC code system with the version tag v2.64:
LOINC/v2.64
Currently only the JSON format is supported, with UTF-8 encoding and content type Content-Type = application/fhir+json;charset=utf-8. In case of any errors during processing, the API responds with an OperationOutcome resource within the response body, using one of the following HTTP status codes:
Snow Owl exposes a comprehensive REST API to support areas such as:
Syndication - content provisioning between servers or between the Snow Owl Authoring platform and servers
Administration (repository and revision control management)
Auditing
SNOMED CT specific browsing and authoring API
Please refer to the official Curator documentation on how to install it on various operating systems.
In order to create backups for Snow Owl, you need a repository in your Elasticsearch cluster.
To create a repository (assuming a shared file system repository, fs), execute the following command:
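A sketch using the Elasticsearch snapshot repository API (the repository name snowowl-backups and the mount path are examples):

  curl -X PUT 'http://localhost:9200/_snapshot/snowowl-backups' \
    -H 'Content-Type: application/json' \
    -d '{ "type": "fs", "settings": { "location": "/path/to/shared/mount" } }'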
Elasticsearch requires that the specified /path/to/shared/mount is whitelisted in the path.repo configuration setting in the elasticsearch.yml configuration file. See the relevant section of the Elasticsearch reference for details.
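For example, in elasticsearch.yml:

  path.repo: ["/path/to/shared/mount"]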
Curator requires a single configuration file to be specified when running it. If you are using a default Elasticsearch cluster with default configuration, the default Curator recommended file should be sufficient. Any configuration changes you have made to your Elasticsearch cluster need to be reflected in this file as well, so that Curator can access your cluster without any issues.
Example curator.yml:
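A minimal sketch for a local, unsecured cluster (adjust hosts, port, TLS and authentication settings to match your environment):

  client:
    hosts:
      - 127.0.0.1
    port: 9200
    timeout: 30
  logging:
    loglevel: INFO
    logformat: default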
Curator uses action YAML files to perform a set of actions sequentially. See the available actions here: https://www.elastic.co/guide/en/elasticsearch/client/curator/5.8/actions.html
A Snapshot Action that can be used to back up the content of a Snow Owl Terminology Server.
Example snowowl_snapshot.yml file:
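A sketch that snapshots the server's indices into the repository created above (the repository name and the index name prefix are assumptions; adjust the filter to match your deployment's index naming):

  actions:
    1:
      action: snapshot
      description: Snapshot Snow Owl indices into snowowl-backups
      options:
        repository: snowowl-backups
        name: snowowl-%Y%m%d%H%M%S
        wait_for_completion: True
      filters:
        - filtertype: pattern
          kind: prefix
          value: snomed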
To execute a Snapshot action manually, you can use the following command:
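Assuming both files reside in the current directory:

  curator --config ./curator.yml ./snowowl_snapshot.yml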
A Restore Action that can be used to restore the latest snapshot (aka backup) to the Snow Owl Terminology Server.
Example snowowl_restore.yml file:
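A sketch that restores the most recent successful snapshot from the same repository (leaving name empty makes Curator pick the latest snapshot):

  actions:
    1:
      action: restore
      description: Restore all indices from the most recent snapshot
      options:
        repository: snowowl-backups
        name:
        wait_for_completion: True
      filters:
        - filtertype: state
          state: SUCCESS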
To execute a Restore action manually, you can use the following command:
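Again assuming both files are in the current directory:

  curator --config ./curator.yml ./snowowl_restore.yml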
An example crontab entry that initiates a daily backup at 03:00 and captures Curator's output (both standard output and standard error) to /var/log/backup.log would look like this:
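A sketch (binary and file paths are illustrative):

  0 3 * * * /usr/local/bin/curator --config /etc/curator/curator.yml /etc/curator/snowowl_snapshot.yml >> /var/log/backup.log 2>&1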
For SNOMED CT, Snow Owl's FHIR implementation follows the FHIR specification's guidance on using SNOMED CT with FHIR.
For ICD-10, Snow Owl's FHIR implementation follows the FHIR specification's guidance on using ICD-10 with FHIR.
To schedule automated backups, you can use cron on Unix-style operating systems to automate the job. The backup interval depends on your use case and how you are accessing the data. If you have a write-heavy scenario, we recommend an hourly backup interval; otherwise, anything between hourly and daily is preferable.
SNOMED CT - http://snomed.info/sct
LCS - Defined when the resource was created
Value Set - Defined when the resource was created
Concept Map - Defined when the resource was created
ATC - http://www.whocc.no/atc
ICD-10 - http://hl7.org/fhir/sid/icd-10
LOINC - http://loinc.org
200 - OK
400 - Bad Request
401 - Unauthorized
403 - Forbidden
404 - Not Found
500 - Internal Error
The following major differences, features and topics are worth mentioning when comparing Snow Owl 7 and 8, and when migrating an existing 7.x deployment to Snow Owl 8.x.
NOTE: It is highly recommended to keep the previous Snow Owl 7 deployment up and running until the data and all connected services have been migrated to the new version successfully. The new Snow Owl 8 system should get its own dedicated machine and deployment environment. Rolling back to the previous state should remain possible, and must be executed when the upgrade cannot be performed successfully.
Due to resource and access management schema changes, content present in a 7.x index cannot be used by a Snow Owl 8 installation. To migrate an existing dataset to the new version, perform an export in the old system and use the exported files to import the content back into the new Snow Owl 8 version.
The following configuration settings have been changed:
Most of the snomed configuration keys have been added to runtime settings under the CodeSystem.settings property. If you have been using any of these configuration values, please raise a ticket and we will help you migrate your current installation to the new version.