Getting started

Snow Owl® is a highly scalable, open source terminology server and collaborative authoring platform. It allows you to store, search and author high volumes of terminology artifacts quickly and efficiently.

Here are a few use-cases that Snow Owl could be used for:

  • You work in the healthcare industry and are interested in using a terminology server for browsing, accessing and distributing components of various terminologies and classifications to third-party consumers. In this case, you can use Snow Owl to load the necessary terminologies and access them via FHIR and proprietary APIs.

  • You are responsible for maintaining and publishing new versions of a particular terminology. In this case, you can use Snow Owl to collaboratively access and author the terminology content and at the end of your release schedule publish it with confidence and zero errors.

  • You have an Electronic Health Record system and would like to capture, maintain and query clinical information in a structured and standardized manner. Your Snow Owl terminology server can integrate with your EHR server via standard APIs to provide the necessary access for both terminology binding and data processing and analytics.

In this tutorial, you will be guided through the process of getting Snow Owl up and running, taking a peek inside it, and performing basic operations like importing SNOMED CT RF2 content, searching, and modifying your data. At the end of this tutorial, you should have a good idea of what Snow Owl is, how it works, and hopefully be inspired to see how you can use it for your needs.


Configuring monitoring

Coming soon!

Conclusion

Snow Owl is both a simple and complex product. We’ve so far learned the basics of what it is, how to look inside of it, and how to work with it using some of the available APIs. Hopefully this tutorial has given you a better understanding of what Snow Owl is and more importantly, inspired you to further experiment with the rest of its great features!

Setting JVM options

You should rarely need to change Java Virtual Machine (JVM) options. If you do, the most likely change is setting the heap size.

The preferred method of setting JVM options (including system properties and JVM flags) is via the SO_JAVA_OPTS environment variable. For instance:

export SO_JAVA_OPTS="$SO_JAVA_OPTS -Djava.io.tmpdir=/path/to/temp/dir"
./bin/startup

When using the RPM or Debian packages, SO_JAVA_OPTS can be specified in the system configuration file.
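A sketch of what that system configuration file might contain — the file locations come from the configuration chapter (/etc/default/snowowl for Debian, /etc/sysconfig/snowowl for RPM), but the option values here are purely illustrative:

```
# /etc/default/snowowl (Debian) or /etc/sysconfig/snowowl (RPM)
# Illustrative values: a fixed-size 4 GB heap and a custom temp directory
SO_JAVA_OPTS="-Xms4g -Xmx4g -Djava.io.tmpdir=/var/tmp/snowowl"
```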


Some other Java programs support the JAVA_OPTS environment variable. This is not a mechanism built into the JVM, but a convention in the ecosystem. Snow Owl does not support this variable; set JVM options via SO_JAVA_OPTS as described above.

Check Health

Let’s start with a basic health check, which we can use to see how our instance is doing. We’ll be using curl, but you can use any tool that allows you to make HTTP/REST calls. Let’s assume that we are still on the node where we started Snow Owl and have opened another command shell window.

To check the instance status/health, we will use the Admin API's info endpoint:

curl http://localhost:8080/snowowl/admin/info

And the response:

In the response, we can see the version of our instance along with the available repositories and their health status (e.g. SNOMED CT with status GREEN).

Configuring Snow Owl

Snow Owl ships with good defaults and requires very little configuration.

Config files location

Snow Owl has three configuration files:

Installing Snow Owl

Snow Owl is provided in the following package formats:

Elasticsearch configuration

By default, Snow Owl starts and connects to an embedded Elasticsearch cluster available at http://localhost:9200. This cluster has only a single node, and its discovery method is set to single-node, which means it cannot connect to other Elasticsearch clusters and is used exclusively by Snow Owl.

This single node Elasticsearch cluster can easily serve Snow Owl in testing, evaluation and small authoring environments, but it is recommended to customize how Snow Owl connects to an Elasticsearch cluster in larger environments (especially when planning to scale with user demand).

You have two options to configure Elasticsearch used by Snow Owl.

Basic Concepts

There are a few concepts that are core to Snow Owl. Understanding these concepts from the outset will tremendously help ease the learning process.

Terminology / Code System

A terminology (also known as a code system, classification, or ontology) defines and encapsulates a set of terminology components (e.g. a set of codes with their meanings) and versions. A terminology is identified by a unique name and stored in a repository. Multiple code systems can exist alongside each other in a single repository as long as their names are unique.

Stopping Snow Owl

An orderly shutdown of Snow Owl ensures that it has a chance to clean up and close outstanding resources. For example, an instance that is shut down in an orderly fashion will initiate an orderly shutdown of the embedded Elasticsearch instance, gracefully close and disconnect connections, and perform other related cleanup activities. You can help ensure an orderly shutdown by properly stopping Snow Owl.

If you’re running Snow Owl as a service, you can stop Snow Owl via the service management functionality provided by your installation.

If you’re running Snow Owl directly, you can stop Snow Owl by sending Ctrl-C if you’re running Snow Owl in the console, or by invoking the provided shutdown script as follows:

Explore Snow Owl

Now that we have our instance up and running, the next step is to understand how to communicate with it. Fortunately, Snow Owl provides very comprehensive and powerful APIs to interact with your instance.

REST API

A few of the things that can be done with the REST API:

Scenarios

This section describes the use case scenarios present in the world of SNOMED CT and how Snow Owl can be used in those scenarios to maximize its full potential. Each scenario comes with a summary and a pros/cons section to help your decision making process when selecting the appropriate scenario for your use case.

Core API

Detailed API documentation is coming soon! Until then we recommend checking out the official Swagger documentation available on your Snow Owl instance at /snowowl/admin.

Set up Snow Owl

This section includes information on how to setup Snow Owl and get it running, including:

  • Downloading

  • Installing

Configure the embedded instance

The first option is to configure the underlying Elasticsearch instance by editing the configuration file elasticsearch.yml which, depending on your installation, is available in the configuration directory (you can create the file if it does not exist; Snow Owl will pick it up during the next startup).


The embedded Elasticsearch version is 6.3.2. If you are configuring Snow Owl to connect to an existing Elasticsearch cluster, make sure that the cluster version matches this version.

Connect to a remote cluster

The second option is to configure Snow Owl to use a remote Elasticsearch cluster without the embedded instance. In order to use this feature you need to set the repository.index.clusterUrl configuration parameter to the remote address of your Elasticsearch cluster. When Snow Owl is configured to connect to a remote Elasticsearch cluster, it won't boot up the embedded instance, which reduces the memory requirements of Snow Owl slightly.

You can connect to self-hosted clusters or to hosted solutions provided by AWS and Elastic.co, for example.

Terminology Component

A terminology component is a basic element in a code system with actual clinical meaning or use. For example in SNOMED CT, the Concept, Description, Relationship and Reference Set Member are terminology components.

Version

A version refers to an important snapshot in time, consistent across many terminology components; it is also known as a tag or label. It is often created when the state of the terminology is deemed ready to be published and distributed to downstream customers or for internal use. A version is identified by its version ID (or version tag) within a given code system.

Repository

A repository manages changes to a set of data over time in the form of revisions. It is conceptually very similar to a source code repository (like a Git repository), but information stored in the repository must conform to a predefined schema (e.g. the SNOMED CT Concepts RF2 schema) as opposed to being stored in pure binary or textual format. This way a repository can support various full-text search functionalities, semantic queries, and evaluations on the stored, revision-controlled terminology data.

A repository is identified by a name and this name is used to refer to the repository when performing create, read, update, delete and other operations against the revisions in it. Repositories organize revisions into branches and commits.

Revision

A revision is the basic unit of information stored in a repository about a terminology component or artifact. It contains two types of information:

  • one is the actual data that you care about, for example a single code from a code system with its meaning and properties.

  • the other is revision control information (aka revision metadata). Each revision is identified by a random Universally Unique IDentifier (UUID) that is assigned when performing a commit in the repository. Also, during a commit each revision is associated with a branch and timestamp. Revisions can be compared, restored, and merged.

Branch

A set of components under version control may be branched or forked at a point in time so that, from that time forward, two copies of those components may develop at different speeds or in different ways independently of each other. At a later point in time, the changes made on one of these branches can be merged into the other.

Branches are organized into hierarchies like directories in file systems. A child branch has access to all of the information that is stored on its parent branch up until its baseTimestamp, which is the time the branch was created. Each repository has a predefined root branch, called MAIN.

Commit

A commit represents a set of changes made against a branch in a repository. After a successful commit, the changes made by the commit are immediately available and searchable on the given branch.

Merge / Rebase

A merge/rebase is an operation in which two sets of changes are applied to a set of components. A merge/rebase always happens between two branches, denoting one as the source and the other as the target of the operation.

  • Check your instance health, status, and statistics

  • Administer your instance data

  • Perform CRUD (Create, Read, Update, and Delete) and search operations against your terminologies

  • Execute advanced search operations such as paging, sorting, filtering, scripting, aggregations, and many others

  • Starting
  • Configuring

    Java (JVM) Version

    Snow Owl is built using Java, and requires at least Java 11 in order to run. Only Oracle’s Java and the OpenJDK are supported. The same JVM version should be used on all Snow Owl nodes and clients.

    We recommend installing Java 11.0.x or a later release in the Java 11 series, using a supported LTS version.

    The version of Java that Snow Owl will use can be configured by setting the JAVA_HOME environment variable.


    Number of threads

    Snow Owl uses a number of thread pools for different types of operations. It is important that it is able to create new threads whenever needed. Make sure that the number of threads that the Snow Owl user can create is at least 4096.

    This can be done by setting ulimit -u 4096 as root before starting Snow Owl, or by setting nproc to 4096 in /etc/security/limits.conf.
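A sketch of the corresponding /etc/security/limits.conf entries — note that the user name snowowl below is an assumption for illustration; use whichever user actually runs the Snow Owl process:

```
# /etc/security/limits.conf — "snowowl" is an assumed user name
snowowl  soft  nproc  4096
snowowl  hard  nproc  4096
```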

    The package distributions, when run as services under systemd, configure the number of threads for the Snow Owl process automatically. No additional configuration is required.

    Logging configuration

    Snow Owl uses SLF4J and Logback for logging.

    The logging configuration file (serviceability.xml) can be used to configure Snow Owl logging. Its location depends on your installation method; by default it is located in the ${SO_HOME}/configuration folder.

    Extensive information on how to customize logging and all the supported appenders can be found in the Logback documentation.


    Whenever we ask for the status, we either get GREEN, YELLOW, or RED and an optional diagnosis message.
    • Green - everything is good (repository is fully functional)

    • Yellow - some data or functionality is not available, or diagnostic operation is in progress (repository is partially functional)

    • Red - diagnostic operation required in order to continue (repository is not functional)

    curl http://localhost:8080/snowowl/admin/info
    Admin API
    $ ./bin/shutdown

    Tweaking for performance

    Scheduler

    # noop I/O scheduler, should be set in eg. /etc/rc.local for solid state disks:
    echo noop > /sys/block/sdX/queue/scheduler

    Virtual memory

    Snow Owl uses a mmapfs directory by default to store its data. The default operating system limit on mmap counts is likely to be too low, which may result in out-of-memory exceptions.

    On Linux, you can increase the limits by running the following command as root:

    sysctl -w vm.max_map_count=262144

    To set this value permanently, update the vm.max_map_count setting in /etc/sysctl.conf. To verify after rebooting, run sysctl vm.max_map_count.

    The RPM and Debian packages will configure this setting automatically. No further configuration is required.

  • snowowl.yml for configuring Snow Owl

  • serviceability.xml for configuring Snow Owl logging

  • elasticsearch.yml for configuring the underlying Elasticsearch instance in case of embedded deployments

    These files are located in the config directory, whose default location depends on whether the installation is from an archive distribution (tar.gz or zip) or a package distribution (Debian or RPM packages).

    For the archive distributions, the config directory location defaults to $SO_HOME/configuration. The location of the config directory can be changed via the SO_PATH_CONF environment variable as follows:

    Alternatively, you can export the SO_PATH_CONF environment variable via the command line or via your shell profile.

    For the package distributions, the config directory location defaults to /etc/snowowl. The location of the config directory can also be changed via the SO_PATH_CONF environment variable, but note that setting this in your shell is not sufficient. Instead, this variable is sourced from /etc/default/snowowl (for the Debian package) and /etc/sysconfig/snowowl (for the RPM package). You will need to edit the SO_PATH_CONF=/etc/snowowl entry in one of these files accordingly to change the config directory location.

    Config file format

    The configuration format is YAML. Here is an example of changing the path of the data directory:

    Settings can also be flattened as follows:

    Environment variable substitution

    Environment variables referenced with the ${...} notation within the configuration file will be replaced with the value of the environment variable, for instance:

    {
      "version": "<version>",
      "description": "You Know, for Terminologies",
      "repositories": {
        "items": [
          {
            "id": "snomedStore",
            "health": "GREEN"
          }
        ]
      }
    }
    SO_PATH_CONF=/path/to/my/config ./bin/startup
    path:
        data: /var/lib/snowowl
    path.data: /var/lib/snowowl
    repository.host: ${HOSTNAME}
    repository.port: ${SO_REPOSITORY_PORT}

    deb

    The deb package is suitable for Debian, Ubuntu, and other Debian-based systems. Debian packages may be downloaded from the Downloads section.

    docker

    Images are available for running Snow Owl as Docker containers. They may be downloaded from the official Docker Hub Registry.


    zip/tar.gz

    The zip and tar.gz packages are suitable for installation on any system and are the easiest choice for getting started with Snow Owl on most systems. See Install Snow Owl with tar.gz or zip.

    rpm

    The rpm package is suitable for installation on Red Hat, CentOS, SLES, openSUSE and other RPM-based systems. RPMs may be downloaded from the Downloads section.

    Import RF2 distribution

    Now let's import an official SNOMED CT RF2 SNAPSHOT distribution archive so that we can further explore the available SNOMED CT APIs.

    To import an RF2 archive you must first create an import configuration using the SNOMED CT Import API as follows:

    curl -X POST http://localhost:8080/snowowl/snomed-ct/v3/imports \
      -H "Content-Type: application/json" \
      -d '{
        "type": "SNAPSHOT",
        "branchPath": "MAIN"
      }'

    And the response:

    HTTP/1.1 204 No Content
    Location: http://localhost:8080/snowowl/snomed-ct/v3/imports/96406e91-84a0-49d3-9e6a-c5c652a36eba

    The import configuration specifies the type of the RF2 release (in this case SNAPSHOT) and the target branchPath where the content should be imported. The response returns an empty body along with a Location header pointing to the created import configuration. You can extract the last part of the URL to get the import configuration ID, which can be used to retrieve the configuration, upload the actual archive, and start the import.
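Extracting the import configuration ID from the Location header can be done with plain shell parameter expansion — a minimal sketch, using the example URL from the response above:

```shell
# Location header value from the import configuration response (example value)
location="http://localhost:8080/snowowl/snomed-ct/v3/imports/96406e91-84a0-49d3-9e6a-c5c652a36eba"

# Strip everything up to and including the last "/" to get the import ID
import_id="${location##*/}"
echo "$import_id"
```

The resulting ID can then be used in the /imports/:id and /imports/:id/archive requests.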


    Depending on the size and type of the RF2 package, hardware and Snow Owl configuration, RF2 imports might take hours to complete. Official SNAPSHOT distributions can be imported in less than 30 minutes by allocating 6 GB of heap size to Snow Owl and configuring Snow Owl to use a solid state disk for its data directory.

    The import will start automatically when you upload the archive to the /imports/:id/archive endpoint:

    The import process is asynchronous and its status can be checked by sending a GET request to the /imports/:id endpoint with the extracted import identifier as follows:

    And the response:

    The status field describes the current state of the import, while the startDate and completionDate fields specify start and completion timestamps.

    Starting Snow Owl

    The method for starting Snow Owl varies depending on how you installed it.

    Archive packages (.tar.gz, .zip)

    If you installed Snow Owl with a .tar.gz or zip package, you can start Snow Owl from the command line.

    Running Snow Owl from the command line

    Snow Owl can be started from the command line as follows:

    ./bin/startup

    By default, Snow Owl runs in the foreground, prints some of its logs to the standard output (stdout), and can be stopped by pressing Ctrl-C.


    All scripts packaged with Snow Owl assume that Bash is available at /bin/bash. As such, Bash should be available at this path either directly or via a symbolic link.

    Running as a daemon

    To run Snow Owl as a daemon, use the following command:

    Log messages can be found in the $SO_HOME/serviceability/logs/ directory.


    The startup scripts provided in the RPM and Debian packages take care of starting and stopping the Snow Owl process for you.

    RPM packages

    Snow Owl is not started automatically after installation. How to start and stop Snow Owl depends on whether your system uses SysV init or systemd (used by newer distributions). You can tell which is being used by running this command:
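The export does not show the command itself; one common, portable heuristic (a general Linux technique, not something specific to Snow Owl) is to check for the directory that systemd mounts at boot:

```shell
# systemd mounts /run/systemd/system early at boot; its presence
# is a reliable indicator that systemd is the running init system.
if [ -d /run/systemd/system ]; then
  echo "systemd"
else
  echo "SysV init (or another init system)"
fi
```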

    Running Snow Owl with SysV init

    Use the chkconfig command to configure Snow Owl to start automatically when the system boots up:

    Snow Owl can be started and stopped using the service command:

    If Snow Owl fails to start for any reason, it will print the reason for failure to STDOUT. Log files can be found in /var/log/snowowl/.

    Running Snow Owl with systemd

    To configure Snow Owl to start automatically when the system boots up, run the following commands:

    Snow Owl can be started and stopped as follows:

    These commands provide no feedback as to whether Snow Owl was started successfully or not. Instead, this information will be written in the log files located in /var/log/snowowl/.

    Debian packages (Coming Soon)

    Docker images (Coming Soon)

    Important Snow Owl configuration

    While Snow Owl requires very little configuration, there are a number of settings which need to be considered before going into production.

    The following settings must be considered before going to production:

    Elasticsearch settings

    By default, Snow Owl includes the OSS version of Elasticsearch and runs it in embedded mode to store terminology data and make it available for search. This is convenient for single-node environments (e.g. for evaluation, testing and development), but it might not be sufficient when you go into production.

    List available Code Systems

    Now let's take a peek at our code systems:

    curl http://localhost:8080/snowowl/admin/codesystems

    And the response:

    This means we have a single Code System in Snow Owl, called SNOMED CT. It was created by the SNOMED CT module on the first startup of your instance. A Code System lives in a repository, and its working branchPath is currently associated with the default MAIN branch in the snomedStore repository.

    SNOMED CT

    Now that we have a SNOMED CT Code System, let's take a look at its content. We can query it using either the SNOMED CT API or the FHIR API.

    For the sake of simplicity, let's search for the available concepts using the SNOMED CT API. For that we will need the branch we would like to query; fortunately, we already know the value from our previous call to the Code Systems API: it was MAIN. To list all available concepts in a SNOMED CT Code System, use the following command:

    curl http://localhost:8080/snowowl/snomed-ct/v3/MAIN/concepts

    And the response is:

    Which simply means we have no SNOMED CT concepts yet in our instance.

    Multi Extension Authoring

    Multi Extension Authoring and Distribution

    On top of single Edition/Extension distribution and authoring, Snow Owl provides full support for multi-SNOMED CT distribution and authoring even if the Extensions depend on different versions of the SNOMED CT International Edition.

    To achieve a deployment like this, you need to perform the same initialization steps for each desired SNOMED CT Extension as if it were a single extension scenario (see the Single Extension scenario). Development and maintenance of each managed extension can happen in parallel without affecting one another. Each can have its own release cycles, maintenance and upgrade schedules, and so on.

    Disable swapping

    Most operating systems try to use as much memory as possible for file system caches and eagerly swap out unused application memory. This can result in parts of the JVM heap or even its executable pages being swapped out to disk.

    Swapping is very bad for performance, and should be avoided at all costs. It can cause garbage collections to last for minutes instead of milliseconds and can cause services to respond slowly or even time out.

    There are two approaches to disabling swapping. The preferred option is to completely disable swap, but if this is not an option, you can minimize swappiness.


    Releases

    When an Extension reaches the end of its current development cycle, it needs to be prepared for release and distribution.

    Workflows and Authoring Branches

    All planned content changes that are still on their dedicated branch either need to be integrated with the main development version or removed from the scope of the next release.

    Configuring a file realm

    You can manage and authenticate users with the built-in file realm. All the data about the users for the file realm is stored in the users file. The file is located in SO_PATH_CONF and is read on startup.

    You need to explicitly select the file realm in the snowowl.yml configuration file in order to use it for authentication.

    In the above configuration the file realm uses the users file to read your users from. Each row in the file represents a username and password delimited by the : character. The passwords are BCrypt-encrypted hashes. The default users file comes with a default snowowl user with the default snowowl password.
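An entry in the users file might therefore look like the following sketch (the hash shown is a placeholder, not a real BCrypt value):

```
# users file: one "username:bcrypt-hash" pair per line
snowowl:<bcrypt-hash-of-the-password>
```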

    Installation

    Snow Owl requires Java 11 or a newer version. Specifically, as of this writing, it is recommended that you use JDK version 11.0.2 (Oracle or OpenJDK is preferred). Java installation varies from platform to platform, so we won’t go into those details here; Oracle’s recommended installation documentation can be found on Oracle’s website. Suffice to say, before you install Snow Owl, please check your Java version first by running (and then install/upgrade accordingly if needed):

    java -version

    Once we have Java set up, we can download and run Snow Owl. The binaries are available from the Downloads pages. For each release, you can choose among zip and tar archives and DEB and RPM packages.

    Installation example with zip

    File descriptors


    This is only relevant if you are running Snow Owl with an embedded Elasticsearch and not connecting it to an existing cluster.

    Snow Owl (with embedded Elasticsearch) uses a lot of file descriptors or file handles. Running out of file descriptors can be disastrous and will most probably lead to data loss. Make sure to increase the limit on the number of open file descriptors for the user running Snow Owl to 65,536 or higher.

    For the .zip and .tar.gz packages, set ulimit -n 65536 as root before starting Snow Owl, or set nofile to 65536 in /etc/security/limits.conf.

    RPM and Debian packages already default the maximum number of file descriptors to 65536 and do not require further configuration.



    Backup and Restore

    Snow Owl 7 uses a single data source: an Elasticsearch cluster (either embedded or external). To back up and restore the data, we highly recommend the official Snapshot and Restore feature from Elasticsearch. On top of that API, we recommend using tools like Curator to ease the lifecycle management of your Elasticsearch cluster and indices. See the Curator documentation.


    Reminder: for production environments we highly recommend using an external Elasticsearch cluster as opposed to the embedded one. External Elasticsearch clusters are more customizable and can be configured to use other snapshot repository types, like Amazon S3, HDFS, etc.

    Below you can find a very simple guide on how to configure the backup and restore process for your Snow Owl Terminology Server using Curator.


    Concepts

    Coming soon!

    CIS API

    This describes the resources that make up the official Snow Owl® CIS API.


    The Swagger documentation is available on your Snow Owl instance at /snowowl/cis.

    Relationships

    Coming soon!

    curl -X POST -F file=@SnomedCT_RF2Release_INT_20170731.zip 'http://localhost:8080/snowowl/snomed-ct/v3/imports/96406e91-84a0-49d3-9e6a-c5c652a36eba/archive'
    curl http://localhost:8080/snowowl/admin/codesystems
    {
      "items": [
        {
          "oid": "2.16.840.1.113883.6.96",
          "name": "SNOMED CT",
          "shortName": "SNOMEDCT",
          "organizationLink": "http://www.snomed.org",
          "primaryLanguage": "ENG",
          "citation": "SNOMED CT contributes to the improvement of patient care by underpinning the development of Electronic Health Records that record clinical information in ways that enable meaning-based retrieval. This provides effective access to information required for decision support and consistent reporting and analysis. Patients benefit from the use of SNOMED CT because it improves the recording of EHR information and facilitates better communication, leading to improvements in the quality of care.",
          "branchPath": "MAIN",
          "iconPath": "icons/snomed.png",
          "terminologyId": "com.b2international.snowowl.terminology.snomed",
          "repositoryUuid": "snomedStore"
        }
      ]
    }
    curl http://localhost:8080/snowowl/snomed-ct/v3/MAIN/concepts
    {
      "items": [],
      "limit": 50,
      "total": 0
    }

    Users Command

    To simplify file realm configuration, the Snow Owl CLI comes with a command to add a user to the file realm (snowowl users add). See the command help manual (-h option) for further details.

    Authorization

    The file security realm does NOT support authorization at the moment. If you are interested in configuring role-based access control for your users, it is recommended to switch to the LDAP security realm.

    identity:
      providers:
        - file:
            name: users

    To configure Snow Owl to connect to an Elasticsearch cluster, change the clusterUrl property in the snowowl.yml configuration file:

    The value for this setting should be a valid HTTP URL pointing to the HTTP API of your Elasticsearch cluster, which by default runs on port 9200.
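A minimal snowowl.yml sketch, using the repository.index.clusterUrl key mentioned earlier; the host name below is a hypothetical example:

```yaml
# snowowl.yml — the cluster address below is illustrative
repository:
  index:
    clusterUrl: http://elasticsearch.example.com:9200
```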

    Path settings

    If you are using the .zip or .tar.gz archives, the data and logs directories are sub-folders of $SO_HOME. If these important folders are left in their default locations, there is a high risk of them being deleted while upgrading Snow Owl to a new version.

    In production use, you will almost certainly want to change the locations of the data and log folders.

    The RPM and Debian distributions already use custom paths for data and logs.
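Based on the path.data example shown in the configuration chapter, relocating the data directory might look like this (the target path is illustrative):

```yaml
# snowowl.yml — move the data directory out of $SO_HOME
path:
  data: /var/lib/snowowl
```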

    Network settings

    To allow clients to connect to Snow Owl, make sure you open access to the following ports:

    • 8080/TCP: Used by Snow Owl Server's REST API for HTTP access

    • 8443/TCP: Used by Snow Owl Server's REST API for HTTPS access

    • 2036/TCP: Used by the Net4j binary protocol connecting Snow Owl clients to the server

    hashtag
    Setting the heap size

    By default, Snow Owl tells the JVM to use a heap with a minimum and maximum size of 2 GB. When moving to production, it is important to configure heap size to ensure that Snow Owl has enough heap available.

    To configure the heap size settings, change the -Xms and -Xmx settings in the SO_JAVA_OPTS environment variable.

The value for these settings depends on the amount of RAM available on your server and whether you are running Elasticsearch on the same node as Snow Owl (either embedded or as a service) or running it in its own cluster. Good rules of thumb are:

    • Set the minimum heap size (Xms) and maximum heap size (Xmx) to be equal to each other.

    • Too much heap can subject the JVM to long garbage collection pauses.

    • Set Xmx to no more than 50% of your physical RAM, to ensure that there is enough physical RAM left for kernel file system caches.

    • Snow Owl connecting to a remote Elasticsearch cluster requires less memory, but make sure you still allocate enough for your use cases (classification, batch processing, etc.).

    hashtag
    Disable all swap files

    Usually Snow Owl is the only service running on a box, and its memory usage is controlled by the JVM options. There should be no need to have swap enabled.

    On Linux systems, you can disable swap temporarily by running:

    To disable it permanently, you will need to edit the /etc/fstab file and comment out any lines that contain the word swap.
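As a sketch, the edit can be scripted with sed. The sample below operates on a throwaway copy so you can review the result before applying the same expression to /etc/fstab as root.

```shell
# Create a sample fstab and comment out its swap entry (illustration only;
# apply the same sed expression to /etc/fstab as root once reviewed).
printf '%s\n' 'UUID=abcd / ext4 defaults 0 1' '/dev/sda2 none swap sw 0 0' > fstab.sample
sed -i 's/^[^#].*swap.*/# &/' fstab.sample
cat fstab.sample
```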

    hashtag
    Configure swappiness

    Another option available on Linux systems is to ensure that the sysctl value vm.swappiness is set to 1. This reduces the kernel’s tendency to swap and should not lead to swapping under normal circumstances, while still allowing the whole system to swap in emergency conditions.

    sudo swapoff -a
    # sysctl settings, to be added to /etc/sysctl.conf or equivalent
    vm.swappiness = 1
    vm.max_map_count = 262144
    For simplicity, let's use a zip file.

    Let's download the most recent Snow Owl release as follows:

    Then extract it as follows:

    It will then create a number of files and folders in your current directory. We then go into the bin directory as follows:

    And now we are ready to start the instance:

    hashtag
    Successfully running instance

    If everything goes well with the installation, you should see log messages similar to the ones below:

    java -version
    echo $JAVA_HOME
    Releasesarrow-up-right
    curl -L -O https://github.com/b2ihealthcare/snow-owl/releases/download/<version>/snow-owl-oss-<version>.zip
    curl TODO
    {
      "type": "SNAPSHOT",
      "branchPath": "MAIN",
      "createVersions": false,
      "codeSystemShortName": "SNOMEDCT",
      "id": "ec702c17-88b7-454b-9ebc-d2d1e338658e",
      "status": "RUNNING",
      "startDate": "2018-10-10T10:01:08Z"
    }
    ./bin/startup
    nohup ./bin/startup > /dev/null &
    ps -p 1
    sudo chkconfig --add snowowl
    sudo -i service snowowl start
    sudo -i service snowowl stop
    sudo /bin/systemctl daemon-reload
    sudo /bin/systemctl enable snowowl.service
    sudo systemctl start snowowl.service
    sudo systemctl stop snowowl.service
    repository:
      index:
        clusterUrl: http://your.es.cluster:9200 # the ES cluster URL
        clusterUsername: snowowl # Optional username to connect to a protected ES cluster
        clusterPassword: snowowl_password # Optional password to connect to a protected ES cluster
    path:
      data: /var/data/snowowl
    # Set the minimum and maximum heap size to 12 GB.
    SO_JAVA_OPTS="-Xms12g -Xmx12g" ./bin/startup
    unzip snow-owl-oss-<version>.zip
    cd snow-owl-oss-<version>/bin
    ./startup
    TODO example output

    hashtag
    Next steps

    After you have initialized your Snow Owl instance with the Extensions you'd like to maintain, the next steps are:

    • Development

    • Release

    • Upgrade

    single extension
    Snow Owl Multi-Extension Deployment
    hashtag
    Prepare the Release

    After all development branches have been merged and integrated with the main work-in-progress version, the Extension needs to be prepared for release. This usually involves last minute fixes, running quality checks and validation rules and generating the final necessary normal form of the Extension.

    Last minute changes before the Release

    hashtag
    Release

    When all necessary steps have been performed successfully, a new Code System Version needs to be created in Snow Owl to represent the latest release. The versioning process will assign the requested effectiveTime to all unpublished components, update the necessary Metadata reference sets (like the Module Dependency Reference Set) and finally create a version branch to reference this release later.

    Releasing a SNOMED CT Extension

    hashtag
    Packaging

    After a successful release, an RF2 Release Package needs to be generated for downstream consumers of your Extension. Snow Owl can generate the final RF2 Release Package for the newly released version via the RF2 Export APIarrow-up-right.

    SNOMED CT Extension Feature Branches

    Extensions and Snow Owl

    Snow Owl is a multi-purpose terminology server with a main focus on SNOMED CT International Edition and its Extensions. Whether you are a producer of a SNOMED CT Extension or a consumer of one, Snow Owl has you covered. As always, feel free to ask your questions regarding any of the content you read here (raise a ticket on GitHub Issuesarrow-up-right).

    hashtag
    Snow Owl Concepts

    Snow Owl uses the following basic concepts to provide authoring and maintenance support for SNOMED CT Extensions.

    hashtag
    Code Systems

    From the page, we've learned what a Repository is and how Code Systems are defined as part of a Repository.

    circle-info

    Reminder: a Repository is a set of schemas and functionality to provide support for a dedicated set of Code Systems (e.g. the SNOMED CT Repository stores all SNOMED CT related components under revision control and provides quick access). A Repository can contain one or more Code Systems and by default always comes with one predefined Code System, the root Code System (in case of SNOMED CT, this often represents the International Edition).

    SNOMED CT Extensions in Snow Owl are basically Code Systems with their own set of properties and characteristics. With Snow Owl's Code System API, a Code System can be created for each SNOMED CT Extension to easily identify the Code System and its components with a single unique identifier, called the Code System short name. The recommended naming approach when selecting the unique short name identifier is the following:

    • SNOMED CT International Edition: SNOMEDCT - often included in other editions for distribution purposes

    • National Release Center (single maintained extension) - SNOMEDCT-US - represents the SNOMED CT United States of America Extension

    The primary namespace identifier and set of modules and languages can be set during the creation of the Code System, and can be updated later on if required. These properties can be used when users are accessing the terminology server for authoring purposes to provide a seamless authoring experience for the user without them needing to worry about selecting the proper namespace, modules, language tags, etc. (NOTE: this feature is not available yet in the OSS version of Snow Owl)

    hashtag
    Extension Of

    A Snow Owl Code System can be marked as an extensionOf another Code System, which ties them together, forming a dependency between the two Code Systems. A Code System can have multiple Extension Code Systems, but a Code System can only be extensionOf a single Code System.

    hashtag
    Branching

    In Snow Owl, a Repository maintains a set of branches and Code Systems are always attached to a dedicated branch. For example, the default root Code Systems are always tied to the default branch, called MAIN. When creating a new Code System, the "working" branchPath can be specified, and doing so assigns the branch to the Code System. A Code System cannot be attached to multiple branches at the same time, and a branch can only be assigned to a single Code System in a Repository. Snow Owl's branching infrastructure allows the use of isolated environments for both distribution and authoring workflows, therefore branches play a crucial role in SNOMED CT Extension management as well. They also provide support for a seamless upgrade mechanism, which can be used whenever a new version is available in one of your SNOMED CT Extension's dependent Code Systems.

    hashtag
    Versions

    As in real life, a Code System can have zero or more versions (also known as releases). A version is a special branch that is created during the versioning process; it preserves the latest content available at that time and makes it accessible later in that form. Since SNOMED CT Extensions can have releases as well, creating a Code System Version in Snow Owl is required in order to produce the release packages.

    hashtag
    Examples

    The following image shows the repository content rendered from the available commits, after a successful International Edition import.

    Dots represent commits made with the commit message on the right. Green boxes represent where the associated branch's HEAD is currently located. Blue tag labels represent versions created during the commit.

    If your use case would be to import the SNOMED CT US Extension 2019-09-01 version into this repository, then ideally it would look like this:

    The next section describes the use case scenarios in the world of SNOMED CT and the recommended approaches for deploying these scenarios in Snow Owl.

    Single Extension Authoring

    A typical extension scenario is the development of the extension itself. Whether you are starting your extension from scratch or already have a well-developed version that you need to maintain, the first choice you need to make is to identify the dependencies of your SNOMED CT Extension.

    hashtag
    Extending the International Edition

    If your Extension extends the SNOMED CT International Edition directly, then you need to pick one of the available International Edition versions:

    • If you are starting from scratch, it is always recommended to select the latest International Release as the starting point of your Extension.

    • If you have an existing Extension then you probably already know the International Release version your Extension depends on.

    When you have identified the version you need to depend on, import that version (or a later release package that also includes that version in its FULL RF2 package) into Snow Owl first. Make sure that the createVersion feature of the RF2 import process is enabled, so it will automatically create a version for each imported RF2 effectiveTime value.

    After you have successfully imported all dependencies into Snow Owl, the next step is to create a Code System that represents your SNOMED CT Extension (see ). When creating the Code System, besides specifying the namespace and optional modules and languages, you need to enter a Code System shortName, which will serve as the unique identifier of your Extension and select the extensionOf value, which represents the dependency of the Code System.

    After you have successfully created the Code System representing your Extension, you can import any existing content from the most recent release or start from scratch by creating the module concept of your extension.

    circle-info

    RF2 releases tend to have content issues with the International Edition itself or refer to missing content when you try to import them into Snow Owl via the RF2 Import API. For this reason, the recommended way is to always use the most recent Snapshot RF2 release of a SNOMED CT Extension to form its first representation in Snow Owl. That has a high probability of success without any missing component dependency errors during import. If you are having trouble importing an RF2 Release Package into Snow Owl, feel free to raise a question on our page.

    hashtag
    Extending another Extension

    If your Extension needs to extend another Extension and not the International Edition itself, then you need to identify the version you'd like to depend on in that Extension (that indirectly will select the International Edition dependency as well). When you have identified all required versions, then starting from the International Edition recursively traverse back and repeat the RF2 Import and Code System creation steps described in the previous section until you have finally imported your extension. In the end your deployment might look like this, depending on how many Extensions you depend on.

    hashtag
    Summary

    Setting up a Snow Owl deployment like this is not an easy task. It requires a thorough understanding of each SNOMED CT Extension you'd like to import and their dependencies as well. However, after the initial setup, the maintenance of your Extension becomes straightforward, thanks to the clear distinction from the International Edition and from its other dependencies. The release process is easier and you can choose to publish your Extension as an extension only release, or as an Edition or both (see ). Additionally, when a new version is available in one of the dependencies, you will be able to upgrade your Extension with the help of automated validation rules and upgrade processes (see ). From the distribution perspective, this scenario shines when you need to maintain multiple Extensions/Editions in a single deployment.

    Pros:

    • Excellent for authoring and maintenance

    • Good for distribution

    Cons:

    • Harder to set up the initial deployment

    Development

    Authoring is the process by which content is created in an extension in accordance with a set of authoring principles. These principles ensure the quality of content and referential integrity between content in the extension and content in the International Edition (the principles are set by SNOMED International, can be found herearrow-up-right).

    During the extension development process authors are:

    • creating, modifying or inactivating content according to editorial principles and policies

    • running validation processes to verify the quality and integrity of their Extension

    • classifying their authored content with an OWL Reasoner to produce its distribution normal form

    The authors directly (via the available REST and FHIR APIs) or indirectly (via user interfaces, scripts, etc.) work with the Snow Owl Terminology Server to make the necessary changes for the next planned Extension release.

    hashtag
    Workflow and Editing

    Authors often require a dedicated editing environment where they can make the necessary changes and let others review the changes they have made, so errors and issues can be corrected before integrating the change with the rest of the Extension. Similarly to how SNOMED CT Extensions are separated from the SNOMED CT International Edition and other dependencies, this can be achieved by using branches.

    • - to create and merge branches

    • - to compare branches

    hashtag
    Authoring APIs

    To let authors make the necessary changes they need, Snow Owl offers the following SNOMED CT component endpoints to work with:

    • - to create, edit SNOMED CT Concepts

    • - to create, edit SNOMED CT Descriptions

    • - to create, edit SNOMED CT Relationships

    hashtag
    Validation

    To verify quality and integrity of the changes they have made, authors often generate reports and make further fixes according to the received responses. In Snow Owl, reports and rules can be represented with validation queries and scripts.

    • - to run validation rules and fetch their reported issues on a per branch basis

    hashtag
    Classification

    Last but not least, authors run an OWL Reasoner to classify their changes and generate the necessary normal form of their Extension. The provides support for running these reasoner instances and generating the necessary normal form.

    Single Edition

    The most common use case to consume a SNOMED CT Release Package is to import it directly into a Terminology Server (like Snow Owl) and make it available as read-only content for both human and machine access (via REST and FHIR APIs).

    hashtag
    SNOMED CT International Edition

    Since Snow Owl by default comes with a pre-initialized SNOMED CT Code System called SNOMEDCT, it takes just a single call to import the official RF2 package using the SNOMED CT RF2 Import APIarrow-up-right. The import by default creates a Code System Version for each SNOMED CT Effective Date available in the supplied RF2 package. After a successful import, the content is immediately available via REST and FHIR APIs.

    hashtag
    SNOMED CT Extension Edition

    National Release Centers and other Care Providers provide their own SNOMED CT Edition distribution for third-party consumers in RF2 format. Importing their Edition distribution instead of the International Edition directly into the pre-initialized SNOMEDCT Code System makes both the International Edition (always included in Edition packages) and the National Extension available for read-only access.

    hashtag
    Summary

    Without much effort, the single edition scenario provides access to any SNOMED CT Edition directly on the pre-initialized SNOMEDCT Code System. It is easy to set up and maintain. Because of its flat structure, it is good for distribution and extension consumers. Although it can be used for authoring in certain scenarios, due to the missing distinction between the International Edition and the Extension, it is not the best choice for extension authoring and maintenance.

    circle-info

    This scenario can be further extended to support multiple simultaneous Edition releases living on their own dedicated SNOMED CT Code Systems. The Root SNOMEDCT Code System in this case is empty and only serves the purpose of creating other Code Systems "underneath" it. Each SNOMED CT Code System is then imported into its own dedicated branch forming a star-like branch structure at the end (zero-length MAIN branch and content branches). This is useful in distribution scenarios, where multiple Extension Code Systems need to be maintained with their own dedicated set of dependencies and there is no time to set up the proper Extension Scenario (see next section). The only drawback of this setup is the potentially high usage of disk space due to the overlap between the various Editions imported into their own Code Systems (since each of them contains the entire International Release).

    Pros:

    • Good for maintaining the SNOMED CT International Edition

    • Good for distribution

    • Simple to set up and maintain

    Cons:

    • Not recommended for extension authoring and maintenance

    • Not recommended for multi-extension distribution scenarios

    Search SNOMED CT

    hashtag
    GET the ROOT concept:

    curl 'http://localhost:8080/snowowl/snomed-ct/v3/MAIN/concepts/138875005'

    And the response:

    {
      "id": "138875005",
      "released": true,
      "active": true,
      "effectiveTime": "20020131",
      "moduleId": "900000000000207008",
      "iconId": "138875005",
      "definitionStatus": "PRIMITIVE",
      "subclassDefinitionStatus": "NON_DISJOINT_SUBCLASSES"
    }
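The JSON response can be post-processed with standard command line tools. As an illustration (assuming a POSIX shell with grep), the snippet below pulls the definitionStatus field out of a concept response like the one above:

```shell
# Extract a single field from the concept response with grep
response='{"id":"138875005","definitionStatus":"PRIMITIVE"}'
printf '%s\n' "$response" | grep -o '"definitionStatus":"[^"]*"'
```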

    hashtag
    Search by ECL:

    And the response:

    Configuring security

    Snow Owl security features enable you to easily secure your terminology server. You can password-protect your data as well as implement more advanced security measures such as role-based access control and auditing.

    hashtag
    Realms

    By default Snow Owl comes without any security features enabled and all read and write operations are unprotected. To configure a security realm, you can choose from the following built-in identity providers:

    Important System configuration

    Ideally, Snow Owl should run alone on a server and use all of the resources available to it. In order to do so, you need to configure your operating system to allow the user running Snow Owl to access more resources than allowed by default.

    The following settings must be considered before going to production:

    SNOMED CT API

    This describes the resources that make up the official Snow Owl® SNOMED CT Terminology API.

    circle-info

    Swagger documentation is available on your Snow Owl instance at .

    hashtag

    ConceptMap

    hashtag
    ConceptMap API

    The endpoints /ConceptMap and /ConceptMap/{conceptMapId} and corresponding operations expose the following types of terminology resources:

    Configure a file realm

  • Configure an LDAP realm

  • hashtag
    Authentication

    After configuring at least one security realm, Snow Owl will authenticate all incoming requests to ensure that the sender of the request is allowed to access the terminology server and its contents. To authenticate a request, the client must send an HTTP Basic or Bearer Authorization header with the request. The value should be a user/password pair when using Basic authentication, or a JWTarrow-up-right token generated by Snow Owl when using the Bearer method.

    circle-info

    NOTE: It is recommended in production environments that all communication between a client and Snow Owl is performed through a secure connection.

    Snow Owl sends an HTTP 401 Unauthorized response if a request needs to be authenticated.
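As an illustration, the Basic Authorization header value is simply the base64 encoding of user:password, shown here with the default file realm credentials:

```shell
# Compose a Basic Authorization header value: base64("user:password").
# Uses the default file realm credentials (snowowl/snowowl) as an example.
auth=$(printf '%s' 'snowowl:snowowl' | base64)
echo "Authorization: Basic $auth"
```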

    hashtag
    Authorization

    If supported by the security realm, Snow Owl will also check whether an authenticated user is permitted to perform the requested action on a given resource.

    Within an organization, roles are created for various job functions. The permissions to perform certain operations are assigned to specific roles. Members, staff or other system users are assigned particular roles, and through those role assignments acquire the permissions needed to perform particular system functions. Since users are not assigned permissions directly, but only acquire them through their role (or roles), management of individual user rights becomes a matter of simply assigning appropriate roles to the user's account; this simplifies common operations, such as adding a user, or changing a user's department.

    hashtag
    Rules

    1. Role assignment: A subject can exercise a permission only if the subject has selected or been assigned a role.

    2. Permission authorization: A subject can exercise a permission only if the permission is authorized for the subject's active role.

    With rules 1 and 2, it is ensured that users can exercise only permissions for which they are authorized.

    S = Subject = A person or automated agent
    R = Role = Job function or title which defines an authority level
    P = Permissions = An approval of a mode of access to a resource

    hashtag
    Permissions

    In Snow Owl a permission is a single value that represents both the operation the user would like to perform and the resource that is being accessed. The format is the following: <operation>:<resource>

    Currently there are 7 operations supported by Snow Owl:

    • browse - read the contents of a resource

    • edit - write the contents of the resource, delete the resource

    • import - import from external content and formats

    • export - export to external content and formats

    • version - create a version in a Code System, create a release

    • promote - merge content from isolated branch environments to a Code System's development version

    • classify - run classifiers and save their results

    Resources represent the content that is being accessed by a client. A resource can be anything that can be resolved to a database entry. Currently, the following resource formats are allowed to be used in a permission:

    • <repositoryId> - access the entire content available in a terminology repository

    • <repositoryId>/<branch> - access the content available on a branch in a terminology repository

    • <codeSystemId> - access all content of a Code System, including both the latest development and all previous releases

    • <codeSystemId>/<versionId> - access a specific release of a Code System

    There is a special * wild card character that can be used for both the operation and resource parts in a permission value to allow any operation to be performed on any or selected resources, or to allow certain operations to be performed on any available resources.

    Examples:

    • browse:snomedStore - browse all SNOMED CT Code Systems and their content

    • edit:SNOMEDCT-UK-CL - edit the SNOMEDCT-UK-CL Code System

    • export:SNOMEDCT-US/2019-03-01 - export the 2019-03-01 US Extension release

    • *:SNOMEDCT - allow any operations to be performed on the SNOMEDCT Code System

    • browse:* - allow read operations on all available resources

    • *:* - administrator permission, the user can do anything with any of the available resources
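The <operation>:<resource> format above can be sketched with a small shell helper (illustrative only, not part of Snow Owl):

```shell
# Compose a permission value from an operation and a resource
permission() { printf '%s:%s\n' "$1" "$2"; }

permission browse snomedStore               # -> browse:snomedStore
permission export 'SNOMEDCT-US/2019-03-01'  # -> export:SNOMEDCT-US/2019-03-01
permission '*' '*'                          # -> *:* (administrator permission)
```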

    hashtag
    Configuring Authorization

    To configure authorization, please consult the security realm specific documentation:

    • Configure a file realm

    • Configure an LDAP realm

    SNOMED CT Simple Map Reference Sets with Concepts as referenced components
  • SNOMED CT Complex Map Reference Sets

  • SNOMED CT Extended Map Reference Sets

  • Snow Owl's generic Mapping Sets

  • hashtag
    $translate

    All concept maps accessible via the /ConceptMap endpoints are considered when retrieving mappings (translations). The translate request's source parameter, which designates the source value set, cannot be interpreted and is therefore not used. With the exception of SNOMED CT, where the standard URI is expected, our proprietary short name or component ids are used to designate the source/target code system.

    SNOMED CT:

    • Simple Map Type Reference Set mappings are considered equivalent in terms of their correlation

    • The availability and format of target code systems are not guaranteed; there is an ongoing conversation at SNOMED International to rectify this.

    curl 'http://localhost:8080/snowowl/snomed-ct/v3/MAIN/concepts?active=true&ecl=%3C&#33;138875005&limit=1'
    {
      "items": [
        {
          "id": "308916002",
          "released": true,
          "active": true,
          "effectiveTime": "20020131",
          "moduleId": "900000000000207008",
          "iconId": "138875005",
          "definitionStatus": "PRIMITIVE",
          "subclassDefinitionStatus": "NON_DISJOINT_SUBCLASSES"
        }
      ],
      "searchAfter": "AoE_BWVlYzI3Mjc0LTYyZTctNDg3NS05NmVlLThhNTk3OTcxOTJiNw==",
      "limit": 1,
      "total": 19
    }
    National Release Center (multiple maintained extensions) - SNOMEDCT-UK-CL, SNOMEDCT-UK-DR - United Kingdom Clinical and Drug Extensions, respectively
  • Care Provider with a special extension based on a national extension - SNOMEDCT-US-UNMC - University of Nebraska Medical Center's extension builds on top of the SNOMEDCT-US extension

  • getting started
    SNOMED CT International Edition 2020-01-31
    SNOMED CT US Extension 2019-09-01

    Reference Set API - to create, edit SNOMED CT Reference Sets

  • Reference Set Member APIarrow-up-right - to create, edit SNOMED CT Reference Set Members

  • Branching API
    Compare API
    Concept API
    Description API
    Relationship API
    Validation APIarrow-up-right
    Classification APIarrow-up-right
    SNOMED CT Extension Feature Branches
  • Ensure sufficient virtual memory

  • Ensure sufficient threads

  • Tweaking for performance

  • hashtag
    Configuring system settings

    Where to configure system settings depends on which package you have used to install Snow Owl, and which operating system you are using.

    When using the .zip or .tar.gz packages, system settings can be configured:

    • temporarily with ulimit, or

    • permanently in /etc/security/limits.conf.

    When using the RPM or Debian packages, most system settings are set in the system configuration file. However, systems which use systemd require that system limits are specified in a systemd configuration file.

    hashtag
    ulimit

    On Linux systems, ulimit can be used to change resource limits on a temporary basis. Limits usually need to be set as root before switching to the user that will run Snow Owl. For example, to set the number of open file handles (ulimit -n) to 65,536, you can do the following:

    The new limit is only applied during the current session.

    You can consult all currently applied limits with ulimit -a.

    hashtag
    /etc/security/limits.conf

    On Linux systems, persistent limits can be set for a particular user by editing the /etc/security/limits.conf file. To set the maximum number of open files for the snowowl user to 65,536, add the following line to the limits.conf file:
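The line in question follows the standard user / type / item / value layout of limits.conf:

```
snowowl  -  nofile  65536
```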

    This change will only take effect the next time the snowowl user opens a new session.

    circle-info

    hashtag
    Ubuntu and limits.conf

    Ubuntu ignores the limits.conf file for processes started by init.d. To enable the limits.conf file, edit /etc/pam.d/su and uncomment the following line:
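The relevant line in /etc/pam.d/su, once uncommented, is:

```
session    required   pam_limits.so
```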

    hashtag
    Sysconfig file

    When using the RPM or Debian packages, system settings and environment variables can be specified in the system configuration file, which is located in:

    Package

    Location

    RPM

    /etc/sysconfig/snowowl

    Debian

    /etc/default/snowowl

    However, for systems which use systemd, system limits need to be specified via systemd.

    hashtag
    Systemd configuration

    When using the RPM or Debian packages on systems that use systemd, system limits must be specified via systemd.

    The systemd service file (/usr/lib/systemd/system/snowowl.service) contains the limits that are applied by default.

    To override them, add a file called /etc/systemd/system/snowowl.service.d/override.conf (alternatively, you may run sudo systemctl edit snowowl which opens the file automatically inside your default editor). Set any changes in this file, such as:
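For example, to lift the memory lock limit (an illustrative override; set whichever limit directives your deployment needs):

```
[Service]
LimitMEMLOCK=infinity
```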

    Once finished, run the following command to reload units:

    Disable swapping
    Increase file descriptors
    Current Version

    SNOMED CT API endpoints currently have version v3. You have to explicitly set the version of the API via a path parameter. For example:

    hashtag
    Available resources and services

    • Branching

    • Compare

    • Commitsarrow-up-right

    /snowowl/snomed-ct/v3arrow-up-right
    Integrate
    Core API
    GitHub Issuesarrow-up-right
    Release
    Upgrade
    SNOMED CT My Extension based on the International Edition 2020-01-31
    SNOMED CT UNMC Extension 2019-10-31 extends SNOMED CT US 2019-09-01
    SNOMED CT RF2 Import APIarrow-up-right
    SNOMED CT International Edition 2020-01-31
    SNOMED CT US Edition 2020-03-01

    README


    Snow Owl® is a highly scalable, open source terminology server with revision-control capabilities and collaborative authoring platform features. It allows you to store, search and author high volumes of terminology artifacts quickly and efficiently.


    hashtag
    Introduction

    Features include:

    • Revision-controlled authoring

      • Maintains multiple versions (including unpublished and published) for each terminology artifact and provides APIs to access them all

      • Independent work branches offer work-in-process isolation, external business workflow integration and team collaboration

    Download

• WINDOWS - sha

    • MACOS/LINUX - sha

    • RPM - sha


This distribution only includes features licensed under the Apache 2.0 license. To get access to the full set of features, please contact B2i Healthcare.

View the detailed release notes here.

Not the version you're looking for? View past releases.

    Install and Run

    NOTE: You need to have a recent version of Java installed (Java 11+, https://adoptium.net/).

    Once you have downloaded the appropriate package:

• Run bin/snowowl.sh on Unix, or bin/snowowl.bat on Windows

    • Run curl http://localhost:8080/snowowl/admin/info

Learn Snow Owl

    Building from source

    Snow Owl uses Maven for its build system. In order to create a distribution, you will only need Java Development Kit 11 installed.

    Simply run the following command in the cloned directory. The Maven Wrapper will automatically download the required Maven version (and store it under ~/.m2/wrapper/dists).

    The distribution packages can be found in the releng/com.b2international.snowowl.server.update/target folder, when the build is complete.

    To run the test cases, use the following command:

    Development

    These instructions will get Snow Owl up and running on your local machine for development and testing purposes.

Prerequisites

    Snow Owl is an Equinox-OSGi based server. To develop plug-ins for Snow Owl you need to use Eclipse as your IDE:

    • Download Eclipse IDE for Eclipse Committers 2020-09 package from here: https://www.eclipse.org/downloads/packages/release/2020-09/r/eclipse-ide-eclipse-committers

    Required Eclipse plug-ins (install the listed features via Help -> Install New Software...):

    Note: you may have to untick the Show only the latest versions of the available software checkbox to get older versions of a feature. Please use the exact version specified below, not the latest point release.

    • Xtext/Xtend (https://download.eclipse.org/modeling/tmf/xtext/updates/releases/2.23.0/)

      • Xtend IDE 2.23.0 (Xtext)

      • Xtext Complete SDK 2.23.0 (Xtext)

    Eclipse Preferences

    Make sure you have the following preferences enabled/disabled.

    • Plug-in development API baseline errors is set to Ignored (Preferences > Plug-in Development > API Baselines)

    • The Plugin execution not covered by lifecycle configuration: org.apache.maven.plugins:maven-clean-plugin:2.5:clean type of errors can be ignored or changed to Warnings in Preferences->Maven->Errors/Warnings.

    • Set the workspace encoding to UTF-8 (Preferences->General->Workspace)

    Git configuration

    • Make sure the Git line endings are set to input (Preferences->Team->Git->Configuration - add key if missing core.autocrlf = input)

Maven Settings

    • Make sure ~/.m2/settings.xml is updated with the content of the settings.xml file found in this repository's root folder.

    First steps

    1. Import all projects into your Eclipse workspace and wait for the build to complete

2. Select all projects and hit Alt + F5 to trigger a manual update of all Maven projects (to download dependencies from Maven)

3. Open the target-platform/target-platform-local.target file

Contributing

    Please see CONTRIBUTING.md for details.

Versioning

    Our releases use semantic versioning. You can find a chronologically ordered list of notable changes in CHANGELOG.md.

License

    This project is licensed under the Apache 2.0 License. See LICENSE for details and refer to NOTICE for additional licensing notes and uses of third-party components.

Acknowledgements

    In March 2015, SNOMED International generously licensed the Snow Owl Terminology Server components supporting SNOMED CT. They subsequently made the licensed code available to their members and the global community under an open-source license.

    In March 2017, licensed the Snow Owl Terminology Server to support the mandatory adoption of SNOMED CT throughout all care settings in the United Kingdom by April 2020. In addition to driving the UK’s clinical terminology efforts by providing a platform to author national clinical codes, Snow Owl will support the maintenance and improvement of the dm+d drug extension which alone is used in over 156 million electronic prescriptions per month. Improvements to the terminology server made under this agreement will be made available to the global community.

    Many other organizations have directly and indirectly contributed to Snow Owl, including: Singapore Ministry of Health; American Dental Association; University of Nebraska Medical Center (USA); Federal Public Service of Public Health (Belgium); Danish Health Data Authority; Health and Welfare Information Systems Centre (Estonia); Department of Health (Ireland); New Zealand Ministry of Health; Norwegian Directorate of eHealth; Integrated Health Information Systems (Singapore); National Board of Health and Welfare (Sweden); eHealth Suisse (Switzerland); and the National Library of Medicine (USA).

    Extension Management

    Introduction

    The Snow Owl Terminology Server is capable of managing multiple SNOMED CT extensions for both distribution and authoring purposes in a single deployment. This guide describes the typical scenarios, like creating, managing, releasing and upgrading SNOMED CT Extensions in great detail with images. If you are unfamiliar with SNOMED CT Extensions, the next section walks you through their logical model and basic characteristics, while the following pages describe distribution and authoring scenarios as well as how to use the Snow Owl Terminology Server for SNOMED CT Extensions.

    Extensions and Snow Owl

    What is a SNOMED CT Extension?

The official SNOMED CT Extension Practical Guide has been used to help produce the content available on this page: https://confluence.ihtsdotools.org/display/DOCEXTPG

    Common Structure

    SNOMED CT is a multilingual clinical terminology that covers a broad scope. However, some users may need additional concepts, relationships, descriptions or reference sets to support national, local or organizational needs.

    The extension mechanism allows SNOMED CT to be customized to address the terminology needs of a country or organization that are not met by the International Edition.

    A SNOMED CT Extension may contain components and/or derivatives (e.g. reference sets used to represent subsets, maps or language preferences). Since the international edition and all extensions share a common structure, the same application software can be used to enter, store and process information from different extensions. Similarly, reference sets can be constructed to refer to content from both the international release and extensions. The common structure also makes it easier for content developed by an extension producer to be submitted for possible inclusion in a National Edition or the International Edition.

Therefore, a SNOMED CT Extension uses the same Release Format version 2 as the International Edition; they share a common structure and schema (see the Release Format 2 specification).

    Namespace

    Extensions are managed by SNOMED International, and Members or Affiliate Licensees who have been issued a namespace identifier by SNOMED International. A namespace identifier is used to create globally unique SNOMED CT identifiers for each component (i.e. concept, description and relationship) within a Member or Affiliate extension. This ensures that references to extension concepts contained in health record data are unambiguous and can be clearly attributed to a specific issuing organization.

    A national or local extension uses a namespace identifier issued by SNOMED International to ensure that all extension components can be uniquely identified (across all extensions).

Therefore, a SNOMED CT Extension uses a single namespace identifier to identify all core components in the SNOMED CT Extension (see Namespace identifier).
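The namespace identifier is embedded in every long-format extension SCTID: reading from the right, the identifier ends with one check digit and two partition digits, preceded by the seven namespace digits. A minimal sketch of extracting it (the SCTID below is a made-up illustration with namespace 1234567, not a real code):

```python
def namespace_of(sctid: str) -> str:
    """Return the 7-digit namespace from a long-format extension SCTID.

    Layout, right to left: 1 check digit, 2 partition digits,
    7 namespace digits, then the item identifier.
    """
    return sctid[-10:-3]

# Hypothetical extension concept id:
# item "42" + namespace "1234567" + partition "10" + check digit "1"
print(namespace_of("421234567101"))  # -> 1234567
```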

    Modules

    Every SNOMED CT Extension includes one or more modules, and each module contains either SNOMED CT components or reference sets (or both). Modules may be dependent on other modules. A SNOMED CT Edition includes the contents of a focus module together with the contents of all the modules on which it depends. This includes the modules in the International Edition and possibly other modules from a national and/or local extension.

    An edition is defined based on a single focus module. This focus module must be the most dependent module, in that the focus module is dependent on all the other modules in the edition.

Therefore, a SNOMED CT Extension uses one or more modules to categorize the components into meaningful groups (see Modules).

    Language

    SNOMED CT extensions can support a variety of use cases, including:

    Translating SNOMED CT, for example

    • Adding terms used in a local language or dialect

    • Adding terms used by a specific user group, such as patient-friendly terms

Therefore, an Extension can have its own language to support patient-friendly terms, local user groups, etc. (see Purpose).

    Dependency

A SNOMED CT extension is a set of components and reference set members that add to the SNOMED CT International Edition. An extension is created, structured, maintained and distributed in accordance with SNOMED CT specifications and guidelines. Unlike the International Edition, an extension is not a standalone terminology. The content in an extension depends on the SNOMED CT International Edition, and must be used together with the International Edition and any other extension module on which it depends.

Therefore, a SNOMED CT Extension depends on the SNOMED CT International Edition directly, or indirectly through another SNOMED CT Extension (see Extensions).

    Versions

    A specific version of an extension can be referred to using the date on which the extension was published.

    There are many use cases that require a date specific version of an edition, including specifying the substrate of a SNOMED CT query, and specifying the version of SNOMED CT used to code a specific data element in a health record. A versioned edition includes the contents of the specified version of the focus module, plus the contents of all versioned modules on which the versioned focus module depends (as specified in the |Module dependency reference set|). The version of an edition is based on the date on which the edition was released. Many extension providers release their extensions as a versioned edition, using regular and predictable release cycles.

Therefore, a SNOMED CT Extension can be versioned and can have a different release cycle than the SNOMED CT International Edition (see Versions).

    Characteristics

    To summarize, a SNOMED CT Extension has the following characteristics:

    • Uses the same RF2 structure as the SNOMED CT International Edition

• Uses a single namespace identifier to globally identify its content

    • Uses one or more modules to categorize the content into groups

Now that we have a clear understanding of what SNOMED CT Extensions are, let's take a look at how we can use them in Snow Owl.

    Upgrading

    Maintenance of a SNOMED CT Extension is essential to ensure that

    • it incorporates changes requested by terminology consumers

    • it remains aligned with the SNOMED CT International Edition

While both of these maintenance-related tasks may be assigned to one of the upcoming Extension development cycles, there is a clear distinction between the two.

See additional Extension maintenance related material in the official SNOMED CT Extension Practical Guide.

    Change requests

    Changes requested by your terminology consumers are typically content authoring tasks that you would assign to an Extension authoring team. They usually come with a well-described problem you need to address in the terminology as you would do in the usual development cycle.

    See the section on how you can address change requests and incorporate them as regular tasks into the main version of your Extension.

    International Edition Changes

    Aligning content to the SNOMED CT International Edition is one of the main responsibilities of an Extension maintainer. However, keeping up with the changes introduced in SNOMED CT International Edition biannually (on January 31st and July 31st) can be an overwhelming task, especially if:

    • you are under pressure from your terminology consumers to make the requested changes ASAP, especially in mission critical scenarios.

    • the changes introduced in the International Edition are conflicting with your local changes and/or causing maintenance related issues after the upgrade.

    To address SNOMED CT International Edition upgrade tasks in a reliable and reproducible way, Snow Owl offers an upgrade flow for SNOMED CT Extensions.

    Upgrades

A Code System upgrade in Snow Owl is a complex workflow with states and steps. The workflow involves a special Upgrade Code System, a series of automated migration processes and validation rules to ensure the quality and reliability of the operation. The upgrade can be completed quickly if there are no conflicts between the Extension and the International Edition. However, upgrades can also be long-running processes spanning many months when significant structural changes (e.g. in substances, anatomy, or modeling approach) are made in the International Edition.

    Starting the Upgrade

In Snow Owl, SNOMED CT Extensions are linked to their SNOMED CT dependency with the extensionOf property. This property describes the International Edition and its version that the Extension depends on. For example, the SNOMEDCT/2019-07-31 value specifies that our Extension depends on the 2019-07-31 version of the International Edition.

Extension upgrades can be started when there is a new version available in the Extension/Edition we have selected as our dependency in the extensionOf property. When fetching a SNOMED CT Code System via the Code System API, Snow Owl will check if there are any upgrades available and return them in the availableUpdates array property. If there are no upgrades available, the array will be empty.

To start an Extension upgrade to a newer International Edition (or to a newer Extension dependency version), you can use the upgrade endpoint of the Code System API. The only thing that needs to be specified there is the desired new version of the Extension's extensionOf dependency.

    When the upgrade is started, Snow Owl creates a special <codeSystemShortName>-UP-<newExtensionOf> (eg. SNOMEDCT-MYEXT-UP-SNOMEDCT-2020-01-31) Code System to allow authors and the automated processes to migrate the latest development version of the Extension to the new dependency.
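The name of the special Code System can be derived from the extension's short name and the new extensionOf value. A small illustration of the pattern described above, assuming the slash in the dependency identifier is replaced with a dash, as in the SNOMEDCT-MYEXT example:

```python
def upgrade_code_system_id(short_name: str, new_extension_of: str) -> str:
    """Compose the <codeSystemShortName>-UP-<newExtensionOf> identifier.

    e.g. "SNOMEDCT/2020-01-31" contributes "SNOMEDCT-2020-01-31".
    """
    return f"{short_name}-UP-{new_extension_of.replace('/', '-')}"

print(upgrade_code_system_id("SNOMEDCT-MYEXT", "SNOMEDCT/2020-01-31"))
# -> SNOMEDCT-MYEXT-UP-SNOMEDCT-2020-01-31
```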

    Regular Maintenance

Regular daily Extension development tasks still need to be resolved and pushed somewhere in order to continue the development of the Extension, even while an upgrade is in progress. Each Extension retains an active development version during an upgrade, which can be used to push daily maintenance changes and business-as-usual tasks.

    Changes pushed to the development area will regularly need to be synced with the upgrade until the upgrade completes, so the upgrade team will be able to resolve all remaining conflicts and issues.

    Upgrade Checks

    Upgrade Checks ensure the quality of the upgrade process and execute certain tasks/checks automatically. An Upgrade Check can be any logic or function to be run during the upgrade. Upgrade Checks can access the underlying upgrade Code System's content and report any issues (validation rules) or fix content automatically (migration rules). For example, a validation rule (like Active relationships must have active source, type, destination references) can be executed after each change pushed to the upgrade branch to verify whether there is any potentially invalid relationship left to fix or you are ready to go.

    Completing the Upgrade

Once the upgrade authoring team is done with the necessary changes to align the Extension with the new International Edition and all the checks have completed successfully, the upgrade can be completed. Completing the upgrade performs the following steps:

    • Creates a <codeSystemShortName>-DO-<previousExtensionOf> Code System to refer to the previous state of the Extension

    • Changes the current working branch of the Extension Code System to the branch that was used during the upgrade process

• Deletes the <codeSystemShortName>-UP-<newExtensionOf> Code System

    Reference Sets

    Two categories make up Snow Owl's Reference Set API:

    1. Reference Sets category to get, search, create and modify reference sets

    2. Reference Set Members category to get, search, create and modify reference set members

Basic operations like create, update and delete are supported for both categories.

    Actions API

On top of the basic operations, reference sets and members support actions. Actions have an action property to specify which action to execute; the rest of the JSON properties will be used as the body of the Action.

    Supported reference set actions are:

    1. sync - synchronize all members of a query type reference set by executing their query and comparing the results with the current members of their referenced target reference set

    Supported reference set member actions are:

    1. create - create a reference set member (uses the same body as POST /members)

    2. update - update a reference set member (uses the same body as PUT /members)

    3. delete - delete a reference set member

For example, the following will sync a query type reference set member's referenced component with the result of the re-evaluated member's ESCG query:

    Bulk API

The members list of a single reference set can be modified by using the following bulk-like update endpoint:

    Input

The request body should contain the commitComment property and a requests array. The requests array must contain actions (see Actions API) that are enabled for the given set of reference set members. Member create actions can omit the referenceSetId parameter; those will use the one defined as a path parameter in the URL. For example, by using this endpoint you can create, update and delete members of a reference set at once, in one single commit.

    Compare

    Compare API

A comparison of the terminology changes committed to a source or target branch can be conducted by creating a compare resource.

    A review identifier can be added to merge requests as an optional property. If the source or target branch state is different from the values captured when creating the review, the merge/rebase attempt will be rejected. This can happen, for example, when additional commits are added to the source or the target branch while a review is in progress; the review resource state becomes STALE in such cases.

    Reviews and concept change sets have a limited lifetime. CURRENT reviews are kept for 15 minutes, while review objects in any other states are valid for 5 minutes by default. The values can be changed in the server's configuration file.

    Compare two branches

    Response

    Read component state from comparison

    Terminology components (and in fact any content) can be read from any point in time by using the special path expression: {branch}@{timestamp}. To get the state of a SNOMED CT Concept from the previous comparison on the compareBranch at the returned compareHeadTimestamp, you can use the following request:

    Request

    Response

    To get the state of the same SNOMED CT Concept but on the base branch, you can use the following request:

    Request

    Response

Additionally, if you need to compute what has changed on the component since the creation of the task, it is possible to get back the base version of the changed component by using another special path expression: {branch}^.

    Request

    Response

The @ and ^ characters are not URL safe, thus they must be encoded before sending the HTTP request.
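In practice this means percent-encoding @ and ^ when they appear in the path; for example, using Python's standard library (branch names and timestamp are illustrative):

```python
from urllib.parse import quote

# '@' and '^' are not URL safe; quote() encodes them while keeping '/'
# as the path separator by default.
print(quote("MAIN/a@1567282434400"))  # -> MAIN/a%401567282434400
print(quote("MAIN/a^"))               # -> MAIN/a%5E
```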

    CodeSystem

    Code systems maintained within Snow Owl are exposed (read-only) via the endpoints /CodeSystem and /CodeSystem/{codeSystemId}. Supported concept properties are handled and returned if requested. The currently exposed code systems are:

    Snow Owl OSS:

    • SNOMED CT

    • Internal (FHIR) Code Systems (terminology subset)

    Snow Owl Pro:

    • ATC

    • ICD-10

    • LOINC

    SNOMED CT

    All standard and default SNOMED CT properties are supported, including the relationship type properties. In addition to the FHIR SNOMED CT properties, Snow Owl can return the effective time property, with the URI http://snomed.info/field/Concept.effectiveTime.

    Operations

    $lookup

Both GET and POST HTTP methods are supported. Concepts are queried based on code, version, system or Coding. Designations are included as part of the response, as are supported concept properties when requested. No date parameter is supported.

    Example for looking up a code from the Issue-type FHIR code system:

    Example for looking up properties (inactive and method) of the latest version of a SNOMED CT procedure by method code:

    For SNOMED CT, all common and SNOMED CT properties are supported, including all active relationship types.

    $validate-code

Both GET and POST HTTP methods are supported for all exposed terminologies. Example for validating a SNOMED CT code:

    $subsumes

Both GET and POST HTTP methods are supported. Subsumption testing is supported for the ICD-10 and SNOMED CT terminologies.

    Example for SNOMED CT (version 2018-01-31):

    Installing Snow Owl with Debian Package

    The Debian package for Snow Owl can be downloaded from the Downloads section. It can be used to install Snow Owl on any Debian-based system such as Debian and Ubuntu.

    Download and install


    Installing Snow Owl with RPM

The RPM for Snow Owl can be downloaded from the Downloads section. It can be used to install Snow Owl on any RPM-based system such as openSUSE, SLES, CentOS, Red Hat, and Oracle Enterprise.

RPM install is not supported on distributions with old versions of RPM, such as SLES 11 and CentOS 5. Please use another installation method instead.


    Installing Snow Owl with Docker

Snow Owl is also available as Docker images.

    A list of all published Docker images and tags is available on Docker Hub.

    These images are free to use under the Apache 2.0 license. They contain open source features only.

    Pulling the image

    API

    This describes the resources that make up the official Snow Owl® RESTful API.

    Media Types

    Custom media types are used in the API to let consumers choose the format of the data they wish to receive. This is done by adding one of the following types to the Accept header when you make a request. Media types are specific to resources, allowing them to change independently and support formats that other resources don’t.

    The most basic media types the API supports are:

    sudo su # Become `root`
    ulimit -n 65536 # Change the max number of open files
    su snowowl # Become the `snowowl` user in order to start Snow Owl
    snowowl  -  nofile  65536
    [Service]
    LimitMEMLOCK=infinity
    sudo systemctl daemon-reload
    GET /snomed-ct/v3/branches
    Concepts
    Descriptions
    Relationships
    RefSets
    Classification
    Import
Export
    Representing language, dialect or specialty-specific term preferences is possible using a SNOMED CT extension. The logical design of SNOMED CT enables a single clinical idea to be associated with a range of terms or phrases from various languages, as depicted in Figure 3.1-1 below. In an extension, terms relevant for a particular country, specialty, hospital (or other organization) may be created, and different options for term preferences may be specified. Even within the same country, different regional dialects or specialty-specific languages may influence which synonyms are preferred. SNOMED CT supports this level of granularity for language preferences at the national or local level.
    Uses one or more languages to support specific user groups and patient-friendly terms
  • Depends on the SNOMED CT International Edition

• Uses versions (effective times) to identify its content across multiple releases

• https://confluence.ihtsdotools.org/display/DOCEXTPG
    Release Format 2 specification
    Namespace identifier
    Modules
    Purpose
    Extensions
    Versions

    sync - synchronize a single member by executing the query and comparing the results with the current members of the referenced target reference set

    OPCS
  • Local Code Systems

  • application/json;charset=UTF-8 (default)

  • text/plain;charset=UTF-8

  • text/csv;charset=UTF-8

  • application/octet-stream (for file downloads)

  • multipart/form-data (for file uploads)

  • The generic JSON media type (application/json) is available as well, but we encourage you to explicitly set the accepted content type before sending your request.

    Schema

    All data is sent and received as JSON. Blank fields are omitted instead of being included as null.

    All non-effective time timestamps are returned in ISO 8601 format:

    Effective Time values are sent and received in short format:
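For example, both formats can be rendered with Python's strftime (the timestamp is illustrative):

```python
from datetime import datetime, timezone

ts = datetime(2020, 1, 31, 12, 30, 15, tzinfo=timezone.utc)

# ISO 8601 timestamp (YYYY-MM-DDTHH:MM:SSZ)
print(ts.strftime("%Y-%m-%dT%H:%M:%SZ"))  # -> 2020-01-31T12:30:15Z

# Short effective time format (yyyyMMdd)
print(ts.strftime("%Y%m%d"))              # -> 20200131
```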

    Hypermedia

All POST requests return Location headers pointing to the created resource, instead of including the identifier or the entire created resource in the response body. These are meant to provide explicit URLs so that proper API clients don't need to construct URLs on their own. It is highly recommended that API clients use them; doing so will make future upgrades of the API easier for developers. All URLs are expected to be proper RFC 6570 URI templates.
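RFC 6570 templates mark variables with braces. A minimal expansion sketch for simple string variables (full RFC 6570 support needs a dedicated library such as uritemplate; the template below is illustrative):

```python
import re

def expand(template: str, **variables: str) -> str:
    """Expand simple {var} expressions of an RFC 6570 level-1 template."""
    return re.sub(r"\{(\w+)\}", lambda m: variables[m.group(1)], template)

print(expand("/snowowl/snomed-ct/v3/{path}/concepts/{id}",
             path="MAIN", id="138875005"))
# -> /snowowl/snomed-ct/v3/MAIN/concepts/138875005
```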

    Example Location Header:

    Pagination

    Requests that return multiple items will be paginated to 50 items by default. You can request further pages with the searchAfter query parameter.
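The token returned with one page is passed back to fetch the next; a sketch of the client-side loop against a stand-in fetch_page function (the real call would be an HTTP GET with limit and searchAfter query parameters, and real searchAfter tokens are opaque strings rather than the integer offsets used here):

```python
def fetch_page(limit, search_after=None):
    """Stand-in for GET /concepts?limit=...&searchAfter=...; serves a
    fixed in-memory data set so the paging loop can run without a server."""
    data = [f"concept-{i}" for i in range(7)]
    start = int(search_after) if search_after else 0
    items = data[start:start + limit]
    end = start + len(items)
    next_token = str(end) if end < len(data) else None
    return {"items": items, "searchAfter": next_token}

def fetch_all(limit=3):
    items, token = [], None
    while True:
        page = fetch_page(limit, token)
        items.extend(page["items"])
        token = page["searchAfter"]
        if token is None:  # no more pages
            break
    return items

print(len(fetch_all()))  # -> 7
```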

    Link/Resource expansion

    Where applicable, the expand query parameter will include nested objects in the response, to avoid having to issue multiple requests to the server.

    Expanded properties should be followed by parentheses and separated by commas; any options for the expanded property should be given within the parentheses, including properties to expand. Typical values for parameters are given in the "Implementation Notes" section of each endpoint.

    Response:

    Client Errors

    There are three possible types of client errors on API calls that receive request bodies:

Invalid JSON

    Valid JSON but invalid representation

    Conflicts

    Server Errors

    In certain circumstances, Snow Owl might fail to process and respond to a request and responds with a 500 Internal Server Error.

To troubleshoot these, please examine the log files at {SERVER_HOME}/serviceability/logs/log.log and/or raise an issue on GitHub.

    # session    required   pam_limits.so
    POST /members/:id/actions
    {
      "commitComment": "Sync member's target reference set",
      "action": "sync"
    }
    PUT /:path/refsets/:id/members
    {
      "commitComment": "Updating members of my simple type reference set",
      "requests": [
          {
            "action": "create|update|delete|sync",
            "action-specific-props": ...
          }
      ]
    }
    POST /compare 
    {
      "baseBranch": "MAIN",
      "compareBranch": "MAIN/a",
      "limit": 100
    }
    Status: 200 OK
    {
      "baseBranch": "MAIN",
      "compareBranch": "MAIN/a",
      "compareHeadTimestamp": 1567282434400,
      "newComponents": [],
      "changedComponents": ["138875005"],
      "deletedComponents": [],
      "totalNew": 0,
      "totalChanged": 1,
      "totalDeleted": 0
    }
    GET /snomed-ct/v3/MAIN@1567282434400/concepts/138875005
    Status: 200 OK
    {
      "id": "138875005",
      ...
    }
    GET /snomed-ct/v3/MAIN/concepts/138875005
    Status: 200 OK
    {
      "id": "138875005",
      ...
    }
    GET /snomed-ct/v3/MAIN/a^/concepts/138875005
    Status: 200 OK
    {
      "id": "138875005",
      ...
    }
     /CodeSystem/$lookup?system=http://hl7.org/fhir/issue-type&code=login&_format=json
     /CodeSystem/$lookup?system=http://snomed.info/sct&code=128927009&_format=json&property=inactive&property=http://snomed.info/id/260686004
    /CodeSystem/SNOMEDCT/2020-02-04/$validate-code?code=128927009
    /CodeSystem/$subsumes?codeA=409822003&codeB=264395009&system=http://snomed.info/sct/900000000000207008/version/20180131
    YYYY-MM-DDTHH:MM:SSZ
    yyyyMMdd
    http://example.com/snowowl/snomed-ct/v3/MAIN/concepts/123456789
    GET /snowowl/snomed-ct/v3/MAIN/concepts?offset=0&limit=50&expand=fsn(),descriptions()
    {
      "items": [
        {
          "id": "100005",
          "released": true,
          ...
          "fsn": {
            "id": "2709997016",
            "term": "SNOMED RT Concept (special concept)",
            ...
          },
          "descriptions": {
            "items": [
              {
                "id": "208187016",
                "released": true,
                ...
              },
            ],
            "offset": 0,
            "limit": 5,
            "total": 5
          }
        },
        ...
      ],
      "offset": 0,
      "limit": 50,
      "total": 421657
    }
    Status: 400 Bad Request
    {
      "status" : "400",
      "message" : "Invalid JSON representation",
      "developerMessage" : "detailed information about the error for developers"
    }
    Status: 400 Bad Request 
    {
      "status" : "400",
      "message" : "2 Validation errors",
      "developerMessage" : "Input representation syntax or validation errors. Check input values.",
      "violations" : ["violation_message_1", "violation_message_2"]
    }
    Status: 409 Conflict 
    {
      "status" : "409",
      "message" : "Cannot merge source 'branch1' into target 'MAIN'."
    }
    Status: 500 Internal Server Error 
    {
      "status" : "500",
      "message" : "Something went wrong during the processing of your request.",
      "developerMessage" : "detailed information about the error for developers"
    }

    SNOMED CT and others

    • SNOMED CT terminology support

      • RF2 Release File Specification as of 2021-07-31

      • 🆕 Support for Relationships with concrete values

      • Official and Custom Reference Sets

• Expression Constraint Language v1.4.0

      • Compositional Grammar 2.3.1

      • Expression Template Language 1.0.0

      • Query Language Draft 0.1.0

    • With its modular design, the server can maintain multiple terminologies (including local codes, mapping sets, value sets)

• A variety of APIs

    • SNOMED CT API (RESTful and native Java API)

• FHIR API R4 v4.0.1 spec

    • CIS API 1.0

  • Highly extensible and configurable

    • Simple to use plug-in system makes it easy to develop and add new terminology tooling/API or any other functionality

• Built on top of Elasticsearch (highly scalable, distributed, open source search engine)

    • Connect to your existing cluster or use the embedded instance

    • All the power of Elasticsearch is available (full-text search support, monitoring, analytics and many more)


    Navigate to http://localhost:8080/snowowl
  • See SNOMED CT API docs and FHIR API docs

  • FHIR API
  • SNOMED CT API

  • Admin API

  • MWE2 (https://download.eclipse.org/modeling/emft/mwe/updates/releases/2.11.3/)

    • MWE SDK 1.5.3 (MWE)

    • MWE 2 language SDK 2.11.3 (MWE)

  • Groovy Development Tools (https://groovy.jfrog.io/artifactory/plugins-release-local/org/codehaus/groovy/groovy-eclipse-integration/3.9.0/e4.17/org.codehaus.groovy-3.9.0.v202009301344-e2009-RELEASE-updatesite.zip)

    • Eclipse Groovy Development Tools - 3.9.0

    • Groovy-Eclipse M2E integration - 3.9.0

    • Groovy Compiler 2.5 - 3.9.0

  • M2Eclipse (https://archive.eclipse.org/technology/m2e/releases/1.17.2/)

    • m2e 1.17.2

    • m2e PDE 1.17.2

  • Set the line endings to Unix style (Preferences->General->Workspace)

  • Open the target-platform/target-platform-local.target file
  • Wait until Eclipse resolves the target platform (click on the Resolve button if it refuses to do so) and then click on Set as Active Target platform

  • Wait until the build is complete and you have no compile errors

  • Launch snow-owl-oss launch configuration in the Run Configurations menu

  • Navigate to http://localhost:8080/snowowl

    (Upgrade state illustrations: SNOMEDCT-MYEXT can be upgraded to SNOMEDCT/2020-01-31; SNOMEDCT-MYEXT is upgrading to SNOMEDCT/2020-01-31; SNOMEDCT-MYEXT is being developed and upgraded at the same time; SNOMEDCT-MYEXT has been upgraded to International Edition 2020-01-31.)
    Running Snow Owl with SysV init

    Use the update-rc.d command to configure Snow Owl to start automatically when the system boots up:

    Snow Owl can be started and stopped using the service command:

    If Snow Owl fails to start for any reason, it will print the reason for failure to STDOUT. Log files can be found in /var/log/snowowl/.

    Running Snow Owl with systemd

    To configure Snow Owl to start automatically when the system boots up, run the following commands:

    Snow Owl can be started and stopped as follows:

    These commands provide no feedback as to whether Snow Owl was started successfully or not. Instead, this information will be written in the log files located in /var/log/snowowl/.

    Checking that Snow Owl is running

    You can test that your Snow Owl instance is running by sending an HTTP request to:

    which should give you a response like this:

    Configuring Snow Owl

    Snow Owl defaults to using /etc/snowowl for runtime configuration. The ownership of this directory and all files in this directory are set to root:snowowl on package installation and the directory has the setgid flag set so that any files and subdirectories created under /etc/snowowl are created with this ownership as well (e.g., if a keystore is created using the keystore tool). It is expected that this be maintained so that the Snow Owl process can read the files under this directory via the group permissions.

    Snow Owl loads its configuration from the /etc/snowowl/snowowl.yml file by default. The format of this config file is explained in Configuring Snow Owl.


    NOTE: Distributions that use systemd require that system resource limits be configured via systemd rather than via the /etc/sysconfig/snowowl file.

    Directory layout of Debian package

    The Debian package places config files, logs, and the data directory in the appropriate locations for a Debian-based system:

    | Type | Description | Default Location | Setting |
    | ---- | ----------- | ---------------- | ------- |
    | home | Snow Owl home directory or $SO_HOME | /usr/share/snowowl | |

    Next steps

    You now have a test Snow Owl environment set up. Before you start serious development or go into production with Snow Owl, you must do some additional setup:

    • Learn how to configure Snow Owl.

    • Configure important Snow Owl settings.

    • Configure important system settings.

    Download and install

    On systemd-based distributions, the installation scripts will attempt to set kernel parameters (e.g., vm.max_map_count); you can skip this by masking the systemd-sysctl.service unit.

    Running Snow Owl with SysV init

    Use the chkconfig command to configure Snow Owl to start automatically when the system boots up:

    Snow Owl can be started and stopped using the service command:

    If Snow Owl fails to start for any reason, it will print the reason for failure to STDOUT. Log files can be found in /var/log/snowowl/.

    Running Snow Owl with systemd

    To configure Snow Owl to start automatically when the system boots up, run the following commands:

    Snow Owl can be started and stopped as follows:

    These commands provide no feedback as to whether Snow Owl was started successfully or not. Instead, this information will be written in the log files located in /var/log/snowowl/.

    Checking that Snow Owl is running

    You can test that your Snow Owl instance is running by sending an HTTP request to:

    which should give you a response like this:

    Configuring Snow Owl

    Snow Owl defaults to using /etc/snowowl for runtime configuration. The ownership of this directory and all files in this directory are set to root:snowowl on package installation and the directory has the setgid flag set so that any files and subdirectories created under /etc/snowowl are created with this ownership as well (e.g., if a keystore is created using the keystore tool). It is expected that this be maintained so that the Snow Owl process can read the files under this directory via the group permissions.

    Snow Owl loads its configuration from the /etc/snowowl/snowowl.yml file by default. The format of this config file is explained in Configuring Snow Owl.

    Directory layout of RPM

    The RPM places config files, logs, and the data directory in the appropriate locations for an RPM-based system:

    | Type | Description | Default Location | Setting |
    | ---- | ----------- | ---------------- | ------- |
    | home | Snow Owl home directory or $SO_HOME | /usr/share/snowowl | |

    Next steps

    You now have a test Snow Owl environment set up. Before you start serious development or go into production with Snow Owl, you must do some additional setup:

    • Learn how to configure Snow Owl.

    • Configure important Snow Owl settings.

    • Configure important system settings.

    Install Snow Owl with .zip or .tar.gz
    Obtaining Snow Owl for Docker is as simple as issuing a docker pull command against the Docker Hub registry.

    Running Snow Owl from the command line

    Development mode

    Snow Owl can be quickly started for development or testing use with the following command:

    Production mode


    The vm.max_map_count kernel setting needs to be set to at least 262144 permanently in /etc/sysctl.conf for production use. To apply the setting on a live system type: sysctl -w vm.max_map_count=262144
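    To make the change persistent, the same value goes into /etc/sysctl.conf. A minimal sketch of the entry:

    ```
    # /etc/sysctl.conf -- applied at boot, or immediately with 'sysctl -p'
    vm.max_map_count=262144
    ```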

    The following example brings up a Snow Owl instance with its dedicated Elasticsearch node. To bring up the cluster, use the docker-compose.yml and type:


    docker-compose is not pre-installed with Docker on Linux. Instructions for installing it can be found on the Docker Compose webpage.

    The node snowowl listens on localhost:8080 while it talks to the elasticsearch node over a Docker network.

    To stop the cluster, type docker-compose down. Data volumes/mounts will persist, so it's possible to start the stack again with the same data using docker-compose up.

    Configuring Snow Owl with Docker

    Snow Owl loads its configuration from files under /usr/share/snowowl/config/. These configuration files are documented in the Configure Snow Owl pages.

    The image offers several methods for configuring Snow Owl settings. The conventional approach is to provide customized configuration files, such as snowowl.yml. It's also possible to use environment variables to set options:

    • A. Bind-mounted configuration

      Create your custom config file and mount this over the image's corresponding file. For example, bind-mounting a custom_snowowl.yml with docker run can be accomplished with the parameter:


    The container runs Snow Owl as user snowowl using uid:gid 1000:1000. Bind mounted host directories and files, such as custom_snowowl.yml above, need to be accessible by this user. For the mounted data and log dirs, such as /usr/share/snowowl/resources, write access is required as well.

    • B. Customized image

      In some environments, it may make more sense to prepare a custom image containing your configuration. A Dockerfile to achieve this may be as simple as:

    You could then build and try the image with something like:

    Notes for production use and defaults

    We have collected a number of best practices for production use. Any Docker parameters mentioned below assume the use of docker run.

    By default, Snow Owl runs inside the container as user snowowl using uid:gid 1000:1000.

    • If you are bind-mounting a local directory or file, ensure it is readable by this user, while the data and log dirs additionally require write access. A good strategy is to grant group access to gid 1000 or 0 for the local directory. As an example, to prepare a local directory for storing data through a bind-mount:

    • It is important to ensure increased ulimits for nofile and nproc are available for the Snow Owl containers. Verify that the init system for the Docker daemon is already setting those to acceptable values and, if needed, adjust them in the daemon, or override them per container, for example using docker run:

    NOTE: One way of checking the Docker daemon defaults for the aforementioned ulimits is by running:

    • Swapping needs to be disabled for performance and stability. This can be achieved through any of the methods mentioned in the system settings.

    • The image exposes TCP ports 8080 and 2036.

    • Use the SO_JAVA_OPTS environment variable to set heap size. For example, to use 16GB use SO_JAVA_OPTS="-Xms16g -Xmx16g" with docker run.

    • Pin your deployments to a specific version of the Snow Owl OSS Docker image. For example, snow-owl-oss:7.2.0.

    • Consider centralizing your logs by using a different logging driver. Also note that the default json-file logging driver is not ideally suited for production use.


    Installing Snow Owl with .zip or .tar.gz

    Snow Owl is provided as a .zip and as a .tar.gz package. These packages can be used to install Snow Owl on any system and are the easiest package format to use when trying out Snow Owl.

    The latest stable version of Snow Owl can be found on the Snow Owl Releases page.


    Snow Owl requires Java 11 or newer. Use the official Oracle distribution or an open-source distribution such as OpenJDK.

    Download and install the zip package

    The .zip archive for Snow Owl can be downloaded and installed as follows:

    Download and install the .tar.gz package

    The .tar.gz archive for Snow Owl can be downloaded and installed as follows:

    Running Snow Owl from the command line

    Snow Owl can be started from the command line as follows:

    By default, Snow Owl runs in the foreground, prints its logs to the standard output (stdout), and can be stopped by pressing Ctrl-C.


    All scripts packaged with Snow Owl assume that Bash is available at /bin/bash. As such, Bash should be available at this path either directly or via a symbolic link.
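    A quick way to verify the requirement (the symlink target path in the comment is purely illustrative):

    ```shell
    # Confirm that a bash executable is on the PATH and where it resolves:
    command -v bash
    # If it is not /bin/bash, a symbolic link satisfies the scripts' expectation:
    # sudo ln -s "$(command -v bash)" /bin/bash
    ```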

    Checking that Snow Owl is running

    You can test that your instance is running by sending an HTTP request to Snow Owl's status endpoint:

    which should give you a response like this:

    Running in the background

    You can send the Snow Owl process to the background using the combination of nohup and the & character:

    Log messages can be found in the $SO_HOME/serviceability/logs/ directory.

    To shut down Snow Owl, you can kill the process ID directly:

    or using the provided shutdown script:
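    The nohup/kill pattern can be exercised with a harmless stand-in command; sleep plays the role of the Snow Owl startup script here, so only the pattern, not the paths, is Snow Owl specific:

    ```shell
    # Start a long-running command in the background, detached from the terminal:
    nohup sleep 60 > /dev/null 2>&1 &
    PID=$!                       # $! holds the PID of the last background job
    echo "started background process $PID"
    kill "$PID"                  # stop it by PID, as you would stop Snow Owl
    ```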

    Directory layout of .zip and .tar.gz archives:

    The .zip and .tar.gz packages are entirely self-contained. All files and directories are, by default, contained within $SO_HOME — the directory created when unpacking the archive.

    This is very convenient because you don’t have to create any directories to start using Snow Owl, and uninstalling Snow Owl is as easy as removing the $SO_HOME directory. However, it is advisable to change the default locations of the config directory, the data directory, and the logs directory so that you do not delete important data later on.

    Next steps

    You now have a test Snow Owl environment set up. Before you start serious development or go into production with Snow Owl, you must do some additional setup:

    • Learn how to configure Snow Owl.

    • Configure important Snow Owl settings.

    • Configure important system settings.

    Configuring an LDAP realm

    You can configure security to communicate with a Lightweight Directory Access Protocol (LDAP) server to authenticate and authorize users.

    To integrate with LDAP, you configure an ldap realm in the snowowl.yml configuration file.

    Configuration

    The following configuration settings are supported:

    The default configuration values are selected to support both OpenLDAP and Active Directory without needing to customize the default schema that comes with their default installation.

    Configure Authentication

    When users send their username and password with their request in the Authorization header, the LDAP security realm performs the following steps to authenticate the user:

    1. Searches for a user entry in the configured baseDn to get the DN

    2. Authenticates with the LDAP instance using the received DN and the provided password

    If any of the above-mentioned steps fails for any reason, the user is not allowed to access the terminology server's content and the server will respond with HTTP 401 Unauthorized.

    To configure authentication, you need to configure the uri, baseDn, bindDn, bindDnPassword, userObjectClass and userIdProperty configuration settings.

    Adding a user

    To add a user in the LDAP realm, create an entry under the specified baseDn using the configured userObjectClass as class and the userIdProperty as the property where the user's username/e-mail address is configured.

    Example user entry:

    Configure Authorization

    In addition to authentication, the LDAP realm provides configuration values to support full role-based access control and authorization.

    When a user's request is successfully authenticated with the LDAP realm, Snow Owl authorizes the request using the user's currently set roles and permissions in the configured LDAP instance.

    Adding a role

    To add a role in the LDAP realm, create an entry under the specified baseDn using the configured roleObjectClass as class and the configured permissionProperty and memberProperty properties for permission and user mappings, respectively.

    Example read-only role:

    Migrate from 6.x

    The following major differences, features and topics are worth mentioning when comparing Snow Owl 6 and 7 and when migrating an existing 6.x deployment to Snow Owl 7.x.

    NOTE: It is highly recommended to keep the previous Snow Owl 6 deployment up and running until the data and all connected services have been migrated to the new version successfully. The new Snow Owl 7 system should get its own dedicated machine and deployment environment. Rolling back to the previous state should remain possible and must be executed if the upgrade cannot be completed successfully.

    Java 11

    From version 7.1, Snow Owl compiles and runs on Java 11+ versions. It is recommended to use the latest OpenJDK or Oracle JDK 11 LTS version. Install it from OpenJDK, Oracle or AdoptOpenJDK. The Oracle JDK comes with commercial usage restrictions; review them before installing.

    RDBMS vs Elasticsearch

    While Snow Owl 6 relied on two data sources for reading and writing data (a primary RDBMS, MySQL, for writing and a secondary Elasticsearch index for full-text search, queries and quick access), Snow Owl 7, on the other hand, requires only a single data source: an Elasticsearch cluster.

    If you were using an external Elasticsearch cluster, we recommend installing the new Elasticsearch 7.x version first, then installing Snow Owl 7.x and finally connecting the two (or using the appropriate Docker images). If you were using the embedded version, then installing the new Snow Owl 7 version is enough.

    After the migration, the MySQL software dependency can be uninstalled from the machine if there are no other services depending on it.

    Database content

    Due to schema changes, the old content present in the RDBMS and index cannot be used by a Snow Owl 7 installation. To migrate an existing dataset to the new version, perform an export in the old system and use the exported files to import the content back into the new Snow Owl 7 version.

    LDAP Authorization

    The new Snow Owl 7 version comes with complete authorization support using JWT authorization tokens. The old User - Role - Permission system can be used by performing the following migration steps:

    1. Add the administrator permission to all administrator roles: *:*

    2. Remove the unused permission values from all roles used by Snow Owl

    3. Add the classify:* permission declaration and assign it to all roles that should be able to run classifications

    Configuration changes

    The Snow Owl 7 configuration file has been renamed to snowowl.yml (from snowowl_config.yml) and moved to the <HOME>/configuration folder.

    The following configuration settings have been changed:

    • repository.database configuration setting has been removed completely

    • repository.numberOfWorkers has been renamed to repository.maxThreads and its default value is now 200

    • metrics settings have been renamed to monitoring

    Apply these changes to the configuration before starting your Snow Owl Terminology Server.
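    As a sketch, the affected part of a migrated snowowl.yml would look like this (keys and nesting shown only for the settings discussed above):

    ```
    repository:
      maxThreads: 200   # was repository.numberOfWorkers in 6.x
    monitoring: {}      # was the 'metrics' section in 6.x
    ```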

    Startup and shutdown

    The old startup.bat, startup.sh, shutdown.bat, shutdown.sh have been replaced with the new snowowl.sh, snowowl.bat and shutdown.sh scripts.

    Packaging

    Snow Owl 7 comes in four distribution formats:

    • zip/tar.gz for manual deployments

    • rpm for CentOS/RHEL based Linux system deployments

    • deb for Debian based Linux system deployments

    • docker for Docker based deployments

    Branching

    Snow Owl provides branching support for terminology repositories. Each repository has an always existing, UP_TO_DATE branch called MAIN. The MAIN branch represents the latest working version of your terminology (similar to a master branch on GitHub).

    You can create your own branches and create/edit/delete components and other resources on them. Branches are identified by their full path, which always starts with MAIN. For example, the branch MAIN/a/b/c/d represents a branch named d under the parent MAIN/a/b/c.

    ValueSet

    ValueSet API

    The endpoints /ValueSet and /ValueSet/{valueSetId} and corresponding operations expose the following types of terminology resources:

    ./mvnw clean package
    ./mvnw clean verify
    wget https://github.com/b2ihealthcare/snow-owl/releases/download/<version>/snow-owl-oss-<version>.deb
    wget https://github.com/b2ihealthcare/snow-owl/releases/download/<version>/snow-owl-oss-<version>.deb.sha512
    shasum -a 512 -c snow-owl-oss-<version>.deb.sha512 # Compares the SHA of the downloaded Debian package and the published checksum, which should output `snow-owl-oss-<version>.deb: OK`.
    sudo dpkg -i snow-owl-oss-<version>.deb
    sudo update-rc.d snowowl defaults 95 10
    sudo -i service snowowl start
    sudo -i service snowowl stop
    sudo /bin/systemctl daemon-reload
    sudo /bin/systemctl enable snowowl.service
    sudo systemctl start snowowl.service
    sudo systemctl stop snowowl.service
    curl http://localhost:8080/snowowl/admin/info
    {
      "version": "7.2.0",
      "description": "You Know, for Terminologies",
      "repositories": {
        "items": [
          {
            "id": "snomedStore",
            "health": "GREEN"
          }
        ]
      }
    }
    wget https://github.com/b2ihealthcare/snow-owl/releases/download/<version>/snow-owl-oss-<version>.rpm
    wget https://github.com/b2ihealthcare/snow-owl/releases/download/<version>/snow-owl-oss-<version>.rpm.sha512
    shasum -a 512 -c snow-owl-oss-<version>.rpm.sha512 # Compares the SHA of the downloaded RPM and the published checksum, which should output `snow-owl-oss-<version>.rpm: OK`.
    sudo rpm --install snow-owl-oss-<version>.rpm
    sudo chkconfig --add snowowl
    sudo -i service snowowl start
    sudo -i service snowowl stop
    sudo /bin/systemctl daemon-reload
    sudo /bin/systemctl enable snowowl.service
    sudo systemctl start snowowl.service
    sudo systemctl stop snowowl.service
    curl http://localhost:8080/snowowl/admin/info
    {
      "version": "7.2.0",
      "description": "You Know, for Terminologies",
      "repositories": {
        "items": [
          {
            "id": "snomedStore",
            "health": "GREEN"
          }
        ]
      }
    }
    docker pull snow-owl-oss:latest
    docker run -p 8080:8080 snow-owl-oss:latest
    docker-compose up
    -v full_path_to/custom_snowowl.yml:/usr/share/snowowl/configuration/snowowl.yml
    FROM snow-owl-oss:{version}
    COPY --chown=snowowl:snowowl snowowl.yml /usr/share/snowowl/configuration/
    docker build --tag=snow-owl-oss-custom .
    docker run -ti -v /usr/share/snowowl/resources snow-owl-oss-custom
      mkdir sodatadir
      chmod g+rwx sodatadir
      chgrp 1000 sodatadir
      --ulimit nofile=65535:65535
      docker run --rm centos:7 /bin/bash -c 'ulimit -Hn && ulimit -Sn && ulimit -Hu && ulimit -Su'
    identity:
      providers:
        - ldap:
            uri: <ldap_uri>
            bindDn: cn=admin,dc=snowowl,dc=b2international,dc=com
            bindDnPassword: <adminpwd>
            baseDn: dc=snowowl,dc=b2international,dc=com
            roleBaseDn: {baseDn}
            userFilter: (objectClass={userObjectClass})
            roleFilter: (objectClass={roleObjectClass})
            userObjectClass: inetOrgPerson
            roleObjectClass: groupOfUniqueNames
            userIdProperty: uid
            permissionProperty: description
            memberProperty: uniqueMember
            usePool: false
  • SNOMED CT Simple Type Reference Sets with Concepts as referenced components
  • SNOMED CT Query Type Reference Sets with ECL expressions (each member is a Value Set)

  • Snow Owl's generic Value Sets

  • Delete and create operations are not implemented.

    $expand

    All value sets accessible via the /ValueSet endpoints can be expanded.

    For SNOMED CT URIs, implicit value sets are supported:

    • ?fhir_vs - all Concept IDs in the edition/version. If the base URI is http://snomed.info/sct, this means all possible SNOMED CT concepts

    • ?fhir_vs=isa/[sctid] - all Concept IDs that are subsumed by the specified Concept

    • ?fhir_vs=refset - all Concept IDs that correspond to reference sets defined in the specified SNOMED CT edition

    • ?fhir_vs=refset/[sctid] - all Concept IDs in the specified reference set

    The in-parameters are not yet supported.
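    As an illustration, an $expand request URL for one of the implicit value sets above can be put together like this (the host, port and /fhir base path follow this guide's other examples and are assumptions; concept 404684003 is only a sample subsumption root):

    ```shell
    # Implicit value set: everything subsumed by a given concept.
    BASE="http://localhost:8080/snowowl/fhir"
    URL="$BASE/ValueSet/\$expand?url=http://snomed.info/sct?fhir_vs=isa/404684003"
    echo "$URL"
    # curl -s "$URL"   # run against a live server to retrieve the expansion
    ```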

    $validate-code

    Codes can be validated against a given Value Set specified by the value set's logical id or canonical URL. In terms of Snow Owl terminology components, codes are validated against:

    • SNOMED CT Simple Type Reference Sets with Concepts as referenced components.

    • SNOMED CT Query Type Reference Sets with ECL expressions (each member is a Value Set)

    • Snow Owl's generic Value Sets

    Validation performs the following checks:

    • The existence of the given Value Set (error if not found)

    • The existence of the reference in the existing Value Set to the given code (error if not found)

    • The existence of the given code in the system (error if not found)

    • Potential version mismatch (error if the reference points to a version that is different from the code's version)

    • The status of the given code and reference (warning if code is inactive while reference is active)

    For SNOMED CT URIs, implicit value sets are supported:

    • ?fhir_vs - all Concept IDs in the edition/version. If the base URI is http://snomed.info/sct, this means all possible SNOMED CT concepts

    • ?fhir_vs=isa/[sctid] - all Concept IDs that are subsumed by the specified Concept

    • ?fhir_vs=refset - all Concept IDs that correspond to reference sets defined in the specified SNOMED CT edition

    • ?fhir_vs=refset/[sctid] - all Concept IDs in the specified reference set

    | Type | Description | Default Location | Setting |
    | ---- | ----------- | ---------------- | ------- |
    | bin | Binary scripts including startup/shutdown to start/stop the instance | /usr/share/snowowl/bin | |
    | conf | Configuration files including snowowl.yml | /etc/snowowl | SO_PATH_CONF |
    | data | The location of the data files and resources | /var/lib/snowowl | path.data |
    | logs | Log files location | /var/log/snowowl | |






    | Configuration | Description |
    | ------------- | ----------- |
    | uri | The LDAP URI that points to the LDAP/AD server to connect to. |
    | bindDn | The user's DN who has access to the entire baseDn and roleBaseDn and can read content from it. |
    | bindDnPassword | The password of the bindDn user. |
    | baseDn | The base directory where all entries in the entire subtree will be considered as potential matches for all searches. |
    | roleBaseDn | Alternative base directory where all role entries in the entire subtree will be considered. Defaults to the baseDn value. |
    | userFilter | The search filter to search for user entries under the configured baseDn. Defaults to (objectClass={userObjectClass}). |
    | roleFilter | The search filter to search for role entries under the configured roleBaseDn. Defaults to (objectClass={roleObjectClass}). |
    | userObjectClass | The user object's class to look for when searching for user entries. Defaults to the inetOrgPerson class. |
    | roleObjectClass | The role object's class to look for when searching for role entries. Defaults to the groupOfUniqueNames class. |
    | userIdProperty | The userId property to access and read for the user's unique identifier, usually their username or email address. Defaults to the uid property. |
    | permissionProperty | A multi-valued property that is used to store permission information on a role. Defaults to the description property. |
    | memberProperty | A multi-valued property that is used to store and retrieve user DNs that belong to a given role. Defaults to the uniqueMember property. |


    Later you can decide to either delete the branch or merge the branch back to its parent. To properly merge a branch back into its parent, sometimes it is required to rebase (synchronize) it first with its parent to get the latest changes. This can be decided via the state attribute of the branch, which represents the current state compared to its parent state.

    Branch states

    There are five different branch states available:

    1. UP_TO_DATE - the branch is up-to-date with its parent; there are no changes on either the branch or its parent

    2. FORWARD - the branch has at least one commit while the parent is still unchanged. Merging a branch requires this state, otherwise it will return an HTTP 409 Conflict.

    3. BEHIND - the parent of the branch has at least one commit while the branch is still unchanged. The branch can be safely rebased with its parent.

    4. DIVERGED - both parent and branch have at least one commit. The branch must be rebased first before it can be safely merged back to its parent.

    5. STALE - the branch is no longer in relation with its former parent, and should be deleted.


    Snow Owl supports merging of unrelated (STALE) branches, so branch MAIN/a can be merged into MAIN/b; there does not have to be a direct parent-child relationship between the two branches.

    Basics

    Get a branch

    Response

    Get all branches

    Response

    Create a branch

    Input

    Response

    Delete a branch

    Response

    Merging

    Perform a merge

    Input

    Response

    Perform a rebase

    Input

    Response

    Monitor progress of a merge or rebase

    Response

    Remove merge or rebase queue item

    Response

    dn: cn=John [email protected],dc=snowowl,dc=b2international,dc=com
    objectClass: inetOrgPerson
    objectClass: organizationalPerson
    objectClass: person
    objectClass: top
    cn: John Doe
    sn: Doe
    uid: [email protected]
    userPassword: <encrypted_password>
    dn: cn=Browser,dc=snowowl,dc=b2international,dc=com
    objectClass: top
    objectClass: groupOfUniqueNames
    cn: Browser
    description: browse:*
    description: export:*
    uniqueMember: cn=John [email protected],dc=snowowl,dc=b2international,dc=com
    GET /branches/:path
    Status: 200 OK
    {
      "name": "MAIN",
      "baseTimestamp": 1431957421204,
      "headTimestamp": 1431957421204,
      "deleted": false,
      "path": "MAIN",
      "state": "UP_TO_DATE"
    }
    GET /branches
    Status: 200 OK
    {
      "items": [
        {
          "name": "MAIN",
          "baseTimestamp": 1431957421204,
          "headTimestamp": 1431957421204,
          "deleted": false,
          "path": "MAIN",
          "state": "UP_TO_DATE"
        }
      ]
    }
    POST /branches
    {
      "parent" : "MAIN",
      "name" : "branchName",
      "metadata": {}
    }
    Status: 201 Created
    Location: http://localhost:8080/snowowl/snomed-ct/v3/branches/MAIN/branchName
    DELETE /branches/:path
    Status: 204 No Content
    POST /merges
    {
      "source" : "MAIN/branchName",
      "target" : "MAIN"
    }
    Status: 202 Accepted
    Location: http://localhost:8080/snowowl/snomed-ct/v3/merges/2f4d3b5b-3020-4e8e-b046-b8266967d7dc
    POST /merges
    {
      "source" : "MAIN",
      "target" : "MAIN/branchName"
    }
    Status: 202 Accepted
    Location: http://localhost:8080/snowowl/snomed-ct/v3/merges/c82c443d-f3f4-4409-9cdb-a744da336936
    GET /merges/c82c443d-f3f4-4409-9cdb-a744da336936
    {
      "id": "c82c443d-f3f4-4409-9cdb-a744da336936",
      "source": "MAIN",
      "target": "MAIN/branchName",
      "status": "COMPLETED",
      "scheduledDate": "2016-02-29T13:52:45Z",
      "startDate": "2016-02-29T13:52:45Z",
      "endDate": "2016-02-29T13:53:06Z"
    }
    DELETE /merges/c82c443d-f3f4-4409-9cdb-a744da336936
    Status: 204 No Content

    Type | Description | Default Location | Setting
    home | Snow Owl home directory or $SO_HOME | Directory created by unpacking the archive |
    bin | Binary scripts including startup/shutdown to start/stop the instance | $SO_HOME/bin |
    conf | Configuration files including snowowl.yml | $SO_HOME/configuration |
    data | The location of the data files and resources. | $SO_HOME/resources | path.data
    logs | Log files location. | $SO_HOME/serviceability/logs |

    configure Snow Owl
    important Snow Owl settings
    important system settings


    FHIR API

    Fast Healthcare Interoperability Resources (FHIR) specifies resources, operations, coded data types and terminologies that are used for representing and communicating coded, structured data in the FHIR core specification within its Terminology Module.

    Snow Owl's pluggable and extensible architecture allows modular development of the FHIR API both in terms of the supported functionality as well as the exposed terminologies. Additionally, Snow Owl's revision-based model allows the concurrent management of multiple versions.

    Resources

    The Snow Owl terminology server's FHIR API release includes support for the following resources:

    Implementation

    Versions

    Snow Owl's repository is a fully-fledged revision control system with branches, versions and revisions. Snow Owl's terminology artefact versions are exposed as FHIR versions for every supported code system, with the exception of SNOMED CT, where the standard SNOMED CT URI specification governs the format (short date) of the version. If no version is specified in a request, the latest version is assumed. If there is no version in the system, the latest state (the head of MAIN) is considered.
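For example, a specific SNOMED CT version could be addressed in a standard FHIR $lookup request using the version URI format mandated by the SNOMED CT URI specification (hypothetical request; the base path depends on the deployment):

```
GET /snowowl/fhir/CodeSystem/$lookup?system=http://snomed.info/sct&version=http://snomed.info/sct/900000000000207008/version/20180131&code=59524001
```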

    Search

    The supported search result filters:

    • _summary

    • _elements

    The supported search parameters:

    • _id

    Sorting and paging

    Sorting and paging are not yet supported.

    URIs

    Each terminology resource is represented by a globally unique logical URI. For code systems these are:

    Code system
    URI

    SNOMED CT

    For SNOMED CT, Snow Owl's FHIR implementation follows the SNOMED CT URI Standard.

    ICD-10

    For ICD-10, Snow Owl's FHIR implementation follows the HL7 FHIR Specification.

    Local Code System

    Snow Owl's Local Code Systems (LCS) are identified by a URI that is based on the Organization Link property stored within Snow Owl's Terminology Registry and the Short Name of the LCS, e.g.: https://b2ihealthcare.com/MyLocalCodeSystem.

    IDs

    The id field of each terminology resource is assigned by our terminology server and is unique within Snow Owl. Once it has been assigned, the id never changes. For this logical identifier, Snow Owl follows the pattern:

    For example, to identify a particular LOINC code system with the version tag 20180131:

    For example, to address a particular SNOMED CT concept (Blood bank procedure):

    where

    • 59524001 represents the concept id

    • 20140203 represents the extension version

    • DK represents the extension branch

    Our logical id has been extended to cover individual Reference Set members as well:

    where

    • 98403008 is the Reference Set ID

    • 84f56f72-9f8b-423d-98b8-25961811393c is the Reference Set member ID

    Snow Owl's extension API

    Snow Owl exposes a comprehensive REST API to support areas such as:

    • Syndication - content provisioning between servers or between the Snow Owl Authoring platform and servers

    • Administration (repository and revision control management)

    • Auditing

    REST API

    Currently only the JSON format is supported, with UTF-8 encoding and a content type of Content-Type = application/fhir+json;charset=utf-8. In case of an error during processing, the API responds with an OperationOutcome within the response body, using one of the following HTTP status codes:
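For illustration, a 404 Not Found response body would carry an OperationOutcome along these lines (the diagnostics message is a made-up example):

```json
{
  "resourceType": "OperationOutcome",
  "issue": [
    {
      "severity": "error",
      "code": "not-found",
      "diagnostics": "CodeSystem with identifier 'xyz' could not be found"
    }
  ]
}
```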

    HTTP Status
    Reason
    wget https://github.com/b2ihealthcare/snow-owl/releases/download/<version>/snow-owl-oss-<version>.zip
    wget https://github.com/b2ihealthcare/snow-owl/releases/download/<version>/snow-owl-oss-<version>.zip.sha512
    shasum -a 512 -c snow-owl-oss-<version>.zip.sha512 # compares the SHA of the downloaded archive, should output: `snow-owl-oss-<version>.zip: OK.`
    unzip snow-owl-oss-<version>.zip
    cd snow-owl-oss-<version>/ # This directory is known as `$SO_HOME`
    wget https://github.com/b2ihealthcare/snow-owl/releases/download/<version>/snow-owl-oss-<version>.tar.gz
    wget https://github.com/b2ihealthcare/snow-owl/releases/download/<version>/snow-owl-oss-<version>.tar.gz.sha512
    shasum -a 512 -c snow-owl-oss-<version>.tar.gz.sha512 # compares the SHA of the downloaded archive, should output: `snow-owl-oss-<version>.tar.gz: OK.`
    tar -xzf snow-owl-oss-<version>.tar.gz
    cd snow-owl-oss-<version>/ # This directory is known as `$SO_HOME`
    ./bin/startup
    curl http://localhost:8080/snowowl/admin/info
    {
      "version": "7.0.0",
      "description": "You Know, for Terminologies",
      "repositories": {
        "items": [
          {
            "id": "snomedStore",
            "health": "GREEN"
          }
        ]
      }
    }
    nohup ./bin/startup > /dev/null &
    kill <pid>
    ./bin/shutdown
    SO_PATH_CONF

    Descriptions

    Coming soon!

    FHIR | Prefixed with http://hl7.org/fhir
    LCS | Prefixed with the organization link
    Value Set | Prefixed with the source URI
    Mapping Set | Prefixed with the source URI

    • 20110131 represents the version of the International Edition the DK extension is based on
    SNOMED CT specific browsing and authoring API

    HTTP Status | Reason
    200 | OK
    400 | Bad Request
    401 | Unauthorized
    403 | Forbidden
    404 | Not Found
    500 | Internal Error

    Code system | URI
    ATC | http://www.whocc.no/atc
    SNOMED CT | http://snomed.info/sct
    ICD-10 | http://hl7.org/fhir/sid/icd-10
    LOINC | http://loinc.org

    CodeSystem API
    ValueSet API
    ConceptMap API
    SNOMED CT URI Standard
    HL7 FHIR Specification

    repository:{branchPath}:{code}[|{member}]
    loincStore:MAIN/20180131
    snomedStore:MAIN/20110131/DK/20140203:59524001
    snomedStore:MAIN/20110131/DK/20140203:98403008|84f56f72-9f8b-423d-98b8-25961811393c

    Curator

    Prerequisites

    Please refer to the official Curator install guide on how to install it on various operating systems.

    Configure Snapshot repository

    In order to create backups of Snow Owl's content, you need to register a snapshot repository in your Elasticsearch cluster.

    To create a repository (assuming shared file system repository, fs), execute the following command:

    Elasticsearch requires that the specified /path/to/shared/mount is whitelisted in the path.repo configuration setting in the elasticsearch.yml configuration file. See the Shared file system repository section of the Elasticsearch reference for details.
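For example, in elasticsearch.yml (the mount path is a placeholder):

```yaml
# Whitelist the shared mount so it can be used as a snapshot repository location
path.repo: ["/path/to/shared/mount"]
```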

    Curator configuration file

    Curator requires a single configuration file to be specified when running it. If you are using an Elasticsearch cluster with default configuration, the recommended default Curator file should be sufficient. Any configuration changes you have made to your Elasticsearch cluster need to be reflected in this file as well, so that Curator can access your cluster without any issues.

    Example curator.yml:

    Snapshot Action

    Curator uses action YML files to perform a set of actions sequentially. See the available actions here:

    The following is a Snapshot Action that can be used to back up the content of a Snow Owl Terminology Server.

    Example snowowl_snapshot.yml file:

    To execute a Snapshot action manually, you can use the following command:

    Restore Action

    The following is a Restore Action that can be used to restore the latest snapshot (i.e. the latest backup) to the Snow Owl Terminology Server.

    Example snowowl_restore.yml file:

    To execute a Restore action manually, you can use the following command:

    Taking scheduled backups

    To schedule automated backups, you can use Cron on Unix-style operating systems to automate the job. The backup interval depends on your use case and how you are accessing the data. If you have a write-heavy scenario, we recommend an hourly backup interval; otherwise any value between hourly and daily is preferable.

    An example crontab entry that initiates a daily backup at 03:00, and captures Curator's output to /var/log/backup.log (both standard output and standard error) would look like this:

    Docker
    Shared file system repository
    https://www.elastic.co/guide/en/elasticsearch/client/curator/5.8/actions.html
    Cron
    $ curl -XPUT -H 'Content-Type: application/json' localhost:9200/_snapshot/snowowl-snapshots -d '
    {
      "type": "fs",
      "settings": {
        "location": "/path/to/shared/mount",
        "compress": true
      }
    }'
    client:
      hosts:
        - 127.0.0.1
      port: 9200
      url_prefix:
      use_ssl: False
      certificate:
      client_cert:
      client_key:
      ssl_no_validate: False
      http_auth:
      timeout: 30
      master_only: False
    
    logging:
      loglevel: INFO
      logfile:
      logformat: default
      blacklist: ['elasticsearch', 'urllib3']
    actions:
      1:
        action: snapshot
        description: >-
          Snapshot all indices. Wait for the snapshot to complete. Do not skip
          the repository filesystem access check.
        options:
          repository: snowowl-snapshots
          name:
          ignore_unavailable: False
          include_global_state: True
          partial: False
          wait_for_completion: True
          skip_repo_fs_check: False
          disable_action: False
        filters:
        - filtertype: none
      2:
        action: delete_snapshots
        description: >-
          Keep 10 most recent snapshots in the selected repository
          (based on creation_date), for 'curator-' prefixed snapshots. Ordering
          is age-based and reversed by default for the 'count' filter.
        options:
          repository: snowowl-snapshots
          disable_action: False
          ignore_empty_list: True
        filters:
        - filtertype: pattern
          kind: prefix
          value: curator-
          exclude:
        - filtertype: count
          count: 10
          use_age: True
          source: creation_date
    $ curator --config curator.yml snowowl_snapshot.yml
    actions:
      1:
        action: restore
        description: >-
          Restore all indices in the most recent curator-* snapshot with state SUCCESS. Wait
          for the restore to complete before continuing. Do not skip the repository
          filesystem access check.
        options:
          repository: snowowl-snapshots
          # If name is blank, the most recent snapshot by age will be selected
          name:
          # If indices is blank, all indices in the snapshot will be restored
          indices:
          include_aliases: False
          ignore_unavailable: False
          include_global_state: False
          partial: False
          rename_pattern:
          rename_replacement:
          extra_settings:
          wait_for_completion: True
          skip_repo_fs_check: False
          disable_action: False
        filters:
        - filtertype: pattern
          kind: prefix
          value: curator-
        - filtertype: state
          state: SUCCESS
    $ curator --config curator.yml snowowl_restore.yml
    0 3 * * * /path/to/curator --config /path/to/snowowl/configs/curator.yml /path/to/snowowl/configs/curator/snowowl_snapshot.yml > /var/log/backup.log 2>&1