
Preload dataset (optional)

In certain cases, a pre-built dataset is shipped together with the Terminology Server to ease the initial setup procedure and get you up and running quickly.

This method is only applicable to deployments where the Elasticsearch cluster is co-located with the Terminology Server.

To load data into a managed Elasticsearch cluster, there are several options:

  • use cross-cluster replication

  • use snapshot-restore (a sketch of this route follows the list)

  • use Snow Owl to rebuild the data on the remote cluster
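
As an illustration of the snapshot-restore route, the commands below register a shared filesystem snapshot repository on the source cluster, take a snapshot, and restore it on the target cluster. The repository name snowowl_backup, the location /mnt/es-snapshots, the snapshot name snapshot_1 and the localhost:9200 endpoints are placeholders, not Snow Owl defaults; on a managed cluster the repository is usually registered through the provider's console, and the full procedure is described in the Elasticsearch snapshot and restore documentation.

# On the source cluster: register a shared filesystem repository
# (the location must be listed under path.repo in elasticsearch.yml)
curl -X PUT "localhost:9200/_snapshot/snowowl_backup" \
    -H 'Content-Type: application/json' \
    -d '{ "type": "fs", "settings": { "location": "/mnt/es-snapshots" } }'

# Take a snapshot of the indices and wait for it to complete
curl -X PUT "localhost:9200/_snapshot/snowowl_backup/snapshot_1?wait_for_completion=true"

# On the target cluster (with the same repository attached, typically read-only): restore it
curl -X POST "localhost:9200/_snapshot/snowowl_backup/snapshot_1/_restore"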

These datasets are a compressed form of the Elasticsearch data folder and follow the same structure, except for a top-level folder called indexes. This is the same folder as ./snow-owl/resources/indexes, so to load the dataset simply extract the contents of the dataset archive to this path:

tar --extract \
    --gzip \
    --verbose \
    --same-owner \
    --preserve-permissions \
    --file=snow-owl-resources.tar.gz \
    --directory=/opt/snow-owl/resources/

chown -R 1000:0 /opt/snow-owl/resources

Make sure to validate the file ownership of the indexes folder after extraction, for example with the check below. Elasticsearch requires its data folder to be owned by UID=1000 and GID=0.
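
One way to verify this (assuming the /opt/snow-owl/resources path used above) is to list every entry whose owner or group differs from the expected values; if the command prints nothing, the ownership is correct:

# List any files under the indexes folder not owned by UID 1000 / GID 0
find /opt/snow-owl/resources/indexes \( ! -uid 1000 -o ! -gid 0 \) -ls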
