We recommend installing the Terminology Server on an x86_64 / amd64 Linux operating system where Docker Engine is available. See Docker's list of supported architectures.
Here is the list of distributions we suggest, in order of recommendation:
Ubuntu LTS releases
Debian LTS releases
CentOS 7 (deprecated)
It is possible to install the server release package on other distributions, but bear in mind that there might be limitations.
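To confirm that a candidate host matches these requirements, the CPU architecture and distribution can be checked from a shell with standard tools:

```bash
# Print the CPU architecture; the expected output is "x86_64"
uname -m

# Print the distribution name and version (e.g. Ubuntu, Debian, CentOS)
cat /etc/os-release
```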
Before starting the production deployment of the Terminology Server, make sure that the following prerequisites are installed and configured properly:
Docker Engine
ability to execute bash scripts
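A quick sanity check before deployment could look like the following, assuming only that Docker Engine and bash are already installed:

```bash
# Verify that Docker Engine is installed and the daemon is reachable
docker --version
docker info

# Verify that bash is available for running the installation scripts
bash --version
```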
In case a reverse proxy is used, the Terminology Server requires two ports to be opened either towards the intranet or the internet (depending on usage):
http: port 80
https: port 443
In case there is no reverse proxy installed, the following port must be opened to be able to access the server's REST API:
http: port 8080
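The exact commands for opening these ports depend on the distribution and firewall in use; as an illustration only, with ufw (common on Ubuntu) the rules could look like this:

```bash
# Reverse proxy in front of the Terminology Server: open HTTP and HTTPS
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp

# No reverse proxy: expose the REST API port directly instead
sudo ufw allow 8080/tcp
```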
For installations where Snow Owl and Elasticsearch are co-located, we recommend the following hardware specifications:
Snow Owl & co-located ES | Cloud | Dedicated |
---|---|---|
vCPU | 8 | 8 |
Memory | 32 GB | 32 GB |
I/O performance | >= 5000 IOPS SSD | >= 5000 IOPS SSD |
Disk space | 200 GB | 200 GB |
For installations where Snow Owl connects to a managed Elasticsearch cluster at elastic.co we recommend the following hardware specifications:
Snow Owl | Cloud | Dedicated |
---|---|---|
vCPU | 8 (compute optimized) | 8 |
Memory | 16 GB | 16 GB |
I/O performance | OS: balanced disk, TS file storage: local SSD | OS: HDD / SSD, TS file storage: SSD |
Disk space | OS: 20 GB, TS file storage: 100 GB | OS: 20 GB, TS file storage: 100 GB |

Elasticsearch @ elastic.co | |
---|---|
vCPU | 8 (compute optimized) |
Memory | 4 GB |
I/O performance | handled by elastic.co |
Disk space | 180 GB |
In case Snow Owl is planned to be used with resource-intensive workloads (large code system upgrades, frequent classification of terminologies, bulk authoring), an 8 vCPU / 4 GB Elasticsearch cluster might not be sufficient. Consider increasing the size of the hosted Elasticsearch instance gradually, so that finding the sweet spot is straightforward.
Here are a few examples of virtual machine types that could be used for hosting the Terminology Server at the three most popular cloud providers (including but not limited to):

Cloud Provider | VM type |
---|---|
GCP | |
AWS | |
Azure | |
The technology stack behind the Terminology Server consists of the following components:
The Terminology Server application
Elasticsearch as the data layer
Optional: Authentication/Authorization service
Either an OpenID Connect/OAuth2.0 compatible external service with JSON Web Token support
Or an LDAP-compliant directory service
Optional: A reverse proxy handling the requests towards the REST API
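To make the stack concrete, here is a heavily simplified docker-compose sketch of the three main components; the terminology-server image name, ports, and settings are placeholders and not the actual release package, which ships with its own configuration:

```yaml
# Illustrative only - not the shipped release package
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:8.12.2
    environment:
      - discovery.type=single-node
      - xpack.security.enabled=false
    volumes:
      - es-data:/usr/share/elasticsearch/data

  terminology-server:
    image: example/terminology-server:latest   # placeholder image name
    depends_on:
      - elasticsearch
    ports:
      - "8080:8080"                            # REST API

  nginx:
    image: nginx:stable
    depends_on:
      - terminology-server
    ports:
      - "80:80"
      - "443:443"

volumes:
  es-data:
```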
Outgoing communication from the Terminology Server goes via:
HTTP(S) towards Elasticsearch and the external OpenID Connect/OAuth2 authorization server
LDAP(S) towards the A&A service
Incoming communication is handled through HTTP port 8080.
If a reverse proxy is selected, it channels all incoming traffic through to the Terminology Server.
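As a minimal sketch of this channeling, assuming NGINX is the chosen proxy, a server block that forwards traffic to the Terminology Server on port 8080 could look like this (the server name is a placeholder):

```nginx
server {
    listen 80;
    server_name terminology.example.com;   # placeholder domain

    location / {
        # Forward all REST API traffic to the Terminology Server
        proxy_pass http://localhost:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```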
Elasticsearch versions supported by each major version of Snow Owl:

Snow Owl 7.x | Snow Owl 8.x | Snow Owl 9.x |
---|---|---|
Elasticsearch 7.x (deprecated) | Elasticsearch 8.x | Elasticsearch 8.x |
The Elasticsearch cluster can either be:
a co-located, single-node, self-hosted cluster
a managed Elasticsearch cluster hosted by elastic.co
Having a co-located Elasticsearch service next to the Terminology Server directly impacts the hardware requirements; see the recommended hardware specifications above.
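Whichever option is chosen, the cluster's availability can be checked with Elasticsearch's standard health API before pointing the Terminology Server at it; the hosts and credentials below are placeholders:

```bash
# Returns the cluster status (green/yellow/red) as JSON for a co-located cluster
curl -s "http://localhost:9200/_cluster/health?pretty"

# For a managed cluster at elastic.co, use the HTTPS endpoint and credentials instead
curl -s -u "elastic:<password>" "https://<your-elastic-cloud-endpoint>:9243/_cluster/health?pretty"
```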
For authorization and authentication, the application supports external OpenID Connect/OAuth2 compatible authorization services (e.g. Auth0) and any traditional LDAP Directory Servers. We recommend starting with the bundled OpenLDAP server and evolving to other solutions later, because it is easy to set up and maintain while keeping Snow Owl's user data isolated from any other A&A services.
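If the LDAP route is taken, connectivity to the directory server can be verified with a standard ldapsearch query; the host, bind DN, and base DN below are placeholders:

```bash
# Simple bind and search against the directory server (prompts for the password)
ldapsearch -x -H ldap://localhost:389 \
  -D "cn=admin,dc=example,dc=com" -W \
  -b "dc=example,dc=com" "(objectClass=*)"
```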
A reverse proxy, such as NGINX, is recommended between the Terminology Server and either the intranet or the internet. This increases security and helps with channeling REST API requests appropriately.

With a preconfigured domain name and DNS record, the default installation package can take care of requesting and maintaining the necessary certificates for secure HTTP. See the details of this in the Configuration section.

To simplify the initial setup process, we ship the Terminology Server with a default configuration of a co-located Elasticsearch cluster, a pre-populated OpenLDAP server, and an NGINX reverse proxy with the ability to opt in to an SSL certificate.