This section provides information and instructions to support Clarity LIMS administration tasks.
If you require assistance with Clarity LIMS administration, contact the Illumina Support Team.
If necessary, the Illumina Support team can provide a backup of your Clarity LIMS data. The data are contained in an encrypted file, which can be downloaded from a secure SFTP server.
To receive the backup data file, provide the Illumina Support team with a GNU Privacy Guard (GPG) public key.
For instructions on generating a GPG public key, see the following documentation:
For Microsoft® Windows®, see Creating a certificate.
For Linux® or Mac®, see Generating a new GPG key.
After the Illumina Support team has received the GPG public key, they perform the following actions:
Create a backup file encrypted with the key.
Place the backup file on the LIMS SFTP server at sftp.clarity-lims.com.
Provide a username and password so you can access your data. Backups are added to the SFTP server weekly.
After downloading the backup file, there are several tools available to decrypt the data.
For Windows, use the gpg4win tool. For details, see the Gpg4win Compendium.
For Mac/Linux, use the GPG command on the command line. For details, see Encrypting and decrypting documents.
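As a sketch, a command-line decryption might look like the following (the file name is a placeholder; use the name of the file downloaded from the SFTP server):

```bash
# Decrypt the backup with the private key matching the public key you
# provided to Illumina Support (file name is a placeholder).
gpg --output clarity-backup.tar.gz --decrypt clarity-backup.tar.gz.gpg
```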
If you use, or would like to use, an LDAP server to consolidate directory services, it is possible to integrate LDAP with Clarity LIMS.
The Clarity LIMS LDAP solution allows for the following features:
User name and password authentication against LDAP to govern access to Clarity LIMS.
Ongoing unidirectional synchronization of user information (such as first name, last name, title, phone, fax, and email) from LDAP to Clarity LIMS. For example, if your telephone number is changed in the LDAP directory, the information is pushed down to Clarity LIMS, keeping contact information current.
Automated unidirectional provisioning of user accounts from LDAP to Clarity LIMS. For example, adding a user to a particular group within the LDAP directory automatically results in a local account with LDAP authentication being added to Clarity LIMS.
Our Field Application Specialist (FAS) team meets with you to discuss the current LDAP implementation. In preparation for this meeting, collect the following information:
The type of provisioning you would like to use to synchronize Clarity LIMS with LDAP (automatic or manual).
A list of the LDAP attributes the current system uses to record the following user properties: first name, last name, title, phone number, fax number, and email address.
NOTE: When integrating Clarity LIMS with LDAP, the LIMS database and the LDAP directory remain as separate and distinct entities.
Clarity LIMS is tested with the following LDAP servers:
ApacheDS 1.5 and later
Microsoft Active Directory (Windows Server 2003 or later)
OpenLDAP 2.3.35 and later
While user provisioning and authentication are handled with LDAP, a Clarity LIMS system administrator completes the following steps:
Determine the level of access that a user requires.
Modify the user's account within the LIMS to provide that access.
Once an LDAP integration with Clarity LIMS is established, all changes to user profiles must be made from the LDAP server.
Only automatic user provisioning is available.
With automatic user provisioning, Clarity LIMS users are created automatically by a provisioning tool that periodically synchronizes the LDAP server with the LIMS.
To make use of the LDAP directory services, Clarity LIMS maps to specific LDAP attributes within a defined schema.
However, the directory structure used can vary among installations. Our Field Applications Specialist (FAS) team works with you to complete the following items:
Analyze a specific LDAP solution and directory organization or assist with the selection and initial configuration of an LDAP service.
Discuss the user elements that will be synchronized between the LDAP service and Clarity LIMS systems.
Configure LDAP to connect to your Clarity LIMS systems.
User authentication is handled in Clarity LIMS.
In previous versions of Clarity LIMS, some customers reported slow REST API response times when using LDAP users for authentication. As of Clarity LIMS v5.2.x / v4.3.x, REST API response time is improved by a feature that caches user authentication results, controlled by a new property (api.session.timeout).
To use this feature, do the following:
Make sure that the api.session.timeout property is set.
Include the HTTP Connection and Authorization request headers, and the session cookie, in the HTTP request.
Stored in the Clarity LIMS database properties table, the api.session.timeout property specifies the period of time for which a user's session persists after they have been authenticated.
This property is set during installation or upgrade of the LIMS. The default value is 5 minutes. If necessary, update the value using the omxprops-ConfigTool.jar tool at the following location:
For example:
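A hedged sketch (the jar path and `set` syntax are assumptions; check the tool's usage output on your server):

```bash
# Set the cached-session lifetime to 10 minutes (value units assumed).
java -jar /opt/gls/clarity/tools/propertytool/omxprops-ConfigTool.jar \
     set api.session.timeout 10
```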
For this configuration to take effect, stop and restart Tomcat:
To persist user authentication, the HTTP request must contain the following HTTP request headers:
| Request Header |
|---|
| Connection: Keep-Alive |
| Authorization: Basic <credentials> |
The HTTP request headers are required for the initial request, and for any subsequent request to get a valid JSESSIONID. Additional scenarios are described in the following table.
To make sure that a valid authenticated session is provided if the cookie in the request has expired, also provide the following JSESSIONID cookie:
| Cookie |
|---|
| JSESSIONID=<a valid JSESSIONID from the initial request> |
The following table lists the various combinations of the HTTP Authorization request header and JSESSIONID cookie, and their expected results. It assumes that the HTTP Connection request header is provided in all scenarios.

| Clarity LIMS version | Authorization | JSESSIONID | Expected Result |
|---|---|---|---|
| v5.2.x and later, and v4.3.x | Present | Present (Valid) | Open API does not perform user authentication and responds with the requested resources. |
| | Present | Present (Invalid) | Open API performs user authentication (against the database or LDAP server, depending on the account) and responds with the requested resources. |
| | Absent | Present (Valid) | Open API does not perform user authentication and responds with the requested resources. |
| | Absent | Present (Invalid) | Open API responds with HTTP Status 401 - Unauthorized. |
| | Absent | Absent | Open API responds with HTTP Status 401 - Unauthorized. |
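A minimal sketch of this request pattern with curl, assuming a placeholder server URL and credentials:

```bash
# Initial request: send keep-alive + Basic auth and store the JSESSIONID.
curl -s -c cookies.txt \
     -H "Connection: Keep-Alive" \
     -u apiuser:apipassword \
     https://clarity.example.com/api/v2/projects

# Follow-up request within api.session.timeout: replay the cookie so the
# server can skip re-authenticating against the database or LDAP.
curl -s -b cookies.txt \
     -H "Connection: Keep-Alive" \
     -u apiuser:apipassword \
     https://clarity.example.com/api/v2/projects
```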
This article provides best practice recommendations and guidelines for the following procedures:
Backing up and restoring the BaseSpace Clarity LIMS database
Creating an archive of the Audit Trail database (for details on this feature, see the Enabling, Validating and Disabling Audit Trail article), including performing a 'vacuum and analyze' database maintenance task to reclaim space and optimize performance.
Note that BaseSpace Clarity LIMS works with a variety of industry-standard operating systems and databases, and leverages their inherent file management systems. You should follow the guidelines that best meet the needs of your specific environment.
The user performing the procedures described below must:
Be a Linux administrator who can create and restore database backups and associated file system backups, using standard Linux tools.
Know how to start and stop BaseSpace Clarity LIMS and its installed services.
Have access to the source and destination servers.
These procedures also assume the following:
The BaseSpace Clarity LIMS file store is located in /home/glsftp or /opt/gls/clarity/users/glsftp. If a different location is being used, substitute that directory in the relevant steps below.
The destination server has been integrated with the BaseSpace Clarity LIMS repository.
BaseSpace Clarity LIMS has been successfully validated, and is using an independent database.
The Audit Trail feature is enabled on the BaseSpace Clarity LIMS server (required for Audit Trail archiving procedure only).
Your database backup process will depend on the size of your installation, your hardware environment, and how much data your laboratory processes. However, we recommend that you:
Perform a full backup of the BaseSpace Clarity LIMS database at least once per week, and configure the database server to use archived logging.
Perform a full backup of the BaseSpace Clarity LIMS file system at least once per week, and perform incremental backups daily.
Use a file storage solution that has fail-over capabilities. For example, a RAID array.
You do not need to back up any other BaseSpace Clarity LIMS-specific data. The LIMS does not modify files once they are imported into the system, and configuration information is stored in the database.
An incremental backup is a recovery log (or redo log) that records database changes. Once configured, the relational database management system automatically updates the logs.
When performing incremental backups, there are a number of backup strategies and configurations, including the archival of recovery logs, which will depend on your environment. Your database administrator should establish a robust backup process that suits your organization’s environment and follows the database vendor’s administrative guidelines.
To improve performance and reduce risk, we recommend that you segregate the recovery logs onto a physical disk channel or device separate from where you store the data.
For best performance and to mitigate potential maintenance issues associated with disk space and large database size, it is recommended that you periodically archive the Audit Trail database. The frequency of performing this operation will differ depending on regulatory requirements.
The following steps should be executed by the root user, on the source server.
Print a list of BaseSpace Clarity LIMS installed components and note the version numbers (the version number should match the version of the LIMS previously installed):
For example:
Create an export of the database that can be restored on the destination system.
Archive the customextensions directory: /opt/gls/clarity/customextensions
Back up the configuration file: /etc/httpd/conf.d/clarity.conf
To back up configuration and all attached files, archive the BaseSpace Clarity LIMS file store: /home/glsftp or /opt/gls/clarity/users/glsftp
To back up configuration only, archive the following:
/home/glsftp/*Scripts
/home/glsftp/ProcessType
/home/glsftp/Protocol
Back up SSL certificates: /etc/httpd/sslcertificates/
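The list above might translate into the following minimal sketch (it assumes a local PostgreSQL database named clarity and the default paths shown; adjust all names and paths to your site):

```bash
# Export the database in a format restorable with pg_restore.
pg_dump -U clarity -Fc clarity > /backup/clarity_db.dump

# Archive custom extensions, the Apache config, the file store, and SSL certs.
tar -czf /backup/customextensions.tar.gz /opt/gls/clarity/customextensions
cp /etc/httpd/conf.d/clarity.conf /backup/clarity.conf
tar -czf /backup/glsftp.tar.gz /home/glsftp   # or /opt/gls/clarity/users/glsftp
tar -czf /backup/sslcertificates.tar.gz /etc/httpd/sslcertificates/
```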
Once the archives have been created, you can transfer them to the destination server.
Execute the following steps on the destination server.
Make sure the BaseSpace Clarity LIMS repository file exists in /etc/yum.repos.d/
As the root user, install the components listed in file.txt.
Configure and validate the destination system, following the procedure outlined in Installation Procedure.
Stop the application server stack:
Restore the database exported from the source server.
Restore the archives created.
As the glsjboss user, update configuration for external interface points (api, glsftp):
As the glsjboss user, manually migrate the database forward:
(Optional) To ensure that you have the correct Automated Informatics (AI) credentials, you may also want to rerun the AI configuration script. As the glsai user, from the /opt/gls/clarity/config/ directory run:
Restart the application stack:

Troubleshooting

During the restoration steps, if the migrate_claritylims_operationsinterface_database.sh script fails, try dropping and recreating the database, then restoring from the backup (as in step 5 above) and rerunning the migration script (step 8).
For assistance with backing up and restoring your database, consult with your database administrator or IT group.
After restoring a system, the following scenarios may apply:
The database and file system are from the same point in time
The database is newer than the file system
The file system is newer than the database
When you have synchronized file system and database backups, the system has maintained its integrity and no further action is required.
When the file system backup is older than the database backup, files referenced in the database may no longer exist in the file system.
For example, suppose you have a database backup from 10am and a file system backup from 8am. To update the database, roll back the database and manually re-import the files into BaseSpace Clarity LIMS from their source locations.
To roll back and update the database:
Roll back the file system to the point of the last backup. In the example above, this is 8am.
In the current database, select all the files whose creation date is after the last backup of the file system. This finds all the files imported into the database that will not be in the file system backup. In the example above, these are the files created between 8am and 10am.
For each of the missing files found in step 2, find and record the associated LIMS projects, samples, and processes.
Roll back the database to the time of the last file system backup. In the example above, this is earlier than 8am.
Recreate the associated projects, samples, and processes as recorded in step 3.
Re-import the files. You can find the origin of the files by looking at the database records queried in step 2.
The database and file system are once again synchronized.
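As an illustration of step 2 only (the table and column names here are assumptions that vary by LIMS version; confirm against your schema before relying on this), a query might look like the following:

```bash
# List files recorded in the database after the file-system backup time
# (glsfile and its columns are assumed names -- verify in your schema).
psql -U clarity -d clarity -c \
  "SELECT luid, originallocation, lastmodifieddate
     FROM glsfile
    WHERE lastmodifieddate > '2024-06-01 08:00';"
```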
Finding missing files
It can sometimes be difficult to find missing files as there may be no way to recover them from their source locations (such as instrument computers).
In this case, we recommend that you retain database entries for the missing files. When a user tries to retrieve a missing file in the LIMS, an error message is displayed, but the operation of the system is not affected.
If the files are present in their source locations, and you use the Automated Informatics Automatic Data Capture (ADC) plug-in, you can reset the ADC to re-capture the files and reference them in the database automatically.
To do this, delete the corresponding entries from the file transfer log file stored in the logs folder of the instrument’s ADC installation.
When the database backup is older than the file system backup, files residing in the file system may not be referenced in the database. To update the database, manually re-import the files into the LIMS.
To update the database:
In the file system, search for files created after the last database backup.
In BaseSpace Clarity LIMS, re-import the files into the applicable projects.
It is sometimes difficult to determine the location(s) in which to re-import files in the LIMS. You can manually check for files created by processes that ran during the outage, but ultimately, there is no easy way to do this.
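For step 1 of the list above, a standard find invocation can locate candidates (the store path and timestamp are placeholders):

```bash
# Files in the LIMS file store modified after the last database backup.
find /home/glsftp -type f -newermt "2024-06-01 08:00" -print
```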
The Audit Trail Archiver tool archives the audit tables from a given date, saves them to a file, and removes those records from the Audit Trail database.
For instructions on using the tool, see the Enabling, Validating and Disabling Audit Trail article in the Audit Trail section.
The tool currently only supports Postgres databases. Oracle is not supported at this time.
The tool prompts for a Postgres superuser, as the Postgres COPY command depends upon it.
When archiving audit data created in a version of the LIMS earlier than v4.1, the tool temporarily disables the audit trigger on the loginaudit table. This prevents the loginaudit delete operations performed by the tool from being included in the auditchangelog. Once the deletes have been performed, the trigger is re-enabled.
Follow the steps below to create the Audit Trail archive. Once you have successfully created the archive, perform a vacuuming and analyzing database maintenance task to reclaim database space and optimize performance.
Run the Audit Trail Archiver tool using the following command. The tool accepts these arguments:

| Name | Description | Required |
|---|---|---|
| -db | Path to the tenant lookup jdbc.properties file | Yes |
| -date | All audit entries older than this date (yyyy-MM-dd) will be archived | Yes |
| -F | Fully qualified domain name (FQDN) of the BaseSpace Clarity LIMS server (may be omitted if there is only one server) | No |
| -dir | Absolute path of the destination directory where the audit archive will be written. Must be writable by the Postgres user. | Yes |
| -U | Database superuser username. Prompted for if not supplied. | No |
| -P | Database superuser password. Prompted for if not supplied. | No |
| -f | Proceed without prompting for confirmation. | No |
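A hedged example invocation (the jar name and paths are placeholders; use the command documented for your installation):

```bash
java -jar audit-trail-archiver.jar \
     -db /opt/gls/clarity/tomcat/current/conf/jdbc.properties \
     -date 2016-12-31 \
     -dir /opt/gls/clarity/backups
```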
A warning message displays:
An answer of anything other than Y or Yes (case does not matter) will abort the tool.
All audit entries in the loginaudit, auditeventlog, and auditchangelog tables older than the supplied date are archived to a zip file.
The file contains comma-separated values (CSV) files of exported audit table data.
The name of the zip file indicates the 'before date' that was used to archive the audit data and the date on which the archive was generated.
For example, suppose that on January 15, 2017 you generate an Audit Trail archive and supply the date December 31, 2016 to the tool:
The name of the generated zip file will be: ClarityAuditTrailArchive_before2016-12-31_generated2017-01-15-213246.zip
After the archive is complete and records have been deleted, an audit entry is added to the auditeventlog. The message entry in the auditeventlog indicates where the archive was generated. For example:
The Audit Trail Archiver tool does not produce a log file. All output is directed to stdout and stderr. An example output is shown below:
The Archiver tool will exit with a -1 return code and an informative message for the following error conditions:
One or more of the required arguments are not specified
Invalid date format provided for the -date argument
No servers found in the tenant lookup database
Multiple servers found, but the -F argument was not supplied
The destination directory either does not exist or is not writable for the database user
The Archiver tool cannot obtain the command line console to prompt for the warning
Could not open the jdbc.properties file
Could not find tenant lookup database server information: jdbc.url, jdbc.username, jdbc.password, jdbc.driverClassName, jdbc.tenantUrl
Template properties are not specified in jdbc.properties
Could not establish database connection
Unrecognized fully qualified domain name
Could not find tenant database server information for the FQDN
The zip file was not created
After completing an Audit Trail archive, you can use the vacuumdb operation described below to vacuum and analyze the database. This operation will reclaim space and optimize performance.
Before running the vacuumdb operation, please note the following:
The steps below are provided for illustration purposes only and assume a Software as a Service (SaaS) installation.
In a typical on-premise installation, replace /opt/gls/pgsql/ paths with /var/lib/pgsql/. However, this path may have been changed; confirm the correct path for your system.
Other options may vary depending on your Postgres version and database setup.
The vacuumdb operation is a database maintenance task that should be performed by a database administrator. Ideally, no backup or other database-specific job should be running while performing the operation. You may wish to include the operation in your routine schedule of database maintenance tasks.
Check the database size on disk:
Stop BaseSpace Clarity LIMS, BaseSpace Clarity LIMS Reporting (if applicable), and all sequencing services (if applicable).
Restart PostgreSQL to drop any remaining connections to the database.
Run the vacuum command with Full (-f) and Analyze (-z) options in verbose (-v) mode:
Re-check the database size on disk:
Restart BaseSpace Clarity LIMS, BaseSpace Clarity LIMS Reporting, and all sequencing services, as applicable.
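A minimal sketch of steps 1, 4, and 5, assuming a database named clarity and SaaS-style paths (both are assumptions; confirm for your system):

```bash
du -sh /opt/gls/pgsql/*/data    # 1. check size on disk

# 4. full vacuum + analyze, verbose output
vacuumdb -U postgres --full --analyze --verbose clarity

du -sh /opt/gls/pgsql/*/data    # 5. re-check size
```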
You can import archived data back into a database using the Postgres COPY command.
Unzip the archive.
Run the COPY command in Postgres for each archived table. For example:
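A hypothetical example for one table (the CSV file name follows the archive's naming; whether the export includes a header row, and the column order, must be verified against your export):

```bash
psql -U postgres -d clarity -c \
  "COPY auditchangelog FROM '/opt/gls/clarity/backups/auditchangelog.csv'
   WITH (FORMAT csv, HEADER);"
```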
This section explains how to use the LDAP Checker tool, a script (ldap-checker.jar) that checks and reports on an LDAP configuration. Instructions for use are also provided in the README.txt file that accompanies the tool.
The ldap-checker script is included with the Clarity LIMS installation and is available at the following location:
/opt/gls/clarity/tools/ldap-checker
The ldap-checker script performs numerous checks of the LDAP configuration and reports on any incorrect items found.
Point the script to one or more files containing (at a minimum) the database connection properties. Alternatively, set these properties from the command line.
The script loads properties from the following sources and in the following order:
Any JDBC properties files specified with -f (see the table for options).
If multiple properties files specify the same property, the last file is used.
Any Java system properties specified on the command line using -D.
Properties specified on the command line are only checked if they do not appear in the properties files.
The properties table in the database.
The properties table is only checked if the same property is not already specified in the properties file or on the command line.
After the script has the basic database connection properties, it loads further settings from the corresponding Clarity LIMS database.
The following JDBC properties are required:
jdbc.driverClassName
jdbc.url
jdbc.username
jdbc.password
Options:

| Option | Description |
|---|---|
| -f, --files | Property files to process |
| -h, --help | Show usage information |
| -u, --users | Usernames to check |
Change to the directory containing the ldap-checker tool:
Run the script. To specify a properties file, use the following example:
The tool includes an example database.properties file. This example shows a properties file that is specified with the -f option.
The following options are available:
Edit this file and use it.
Provide properties on the command line, using -D.
For example:
Specify and provide the path to the keystore:
To check a set of specific users (even those that have not been provisioned), use the following script:
To override properties that are typically loaded from the properties table, use command-line system properties or one or more properties files.
Using system property ( -D options must be specified before the -jar option):
Using multiple properties files:
In this example, Custom-ldap.properties might resemble the following:
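Hedged examples tying the pieces together (flags as in the options table above; property values and file names are illustrative):

```bash
cd /opt/gls/clarity/tools/ldap-checker

# Check the configuration using a JDBC properties file.
java -jar ldap-checker.jar -f database.properties

# Check specific users and override a property from the command line
# (-D options must come before -jar).
java -Djdbc.username=clarity -jar ldap-checker.jar \
     -f database.properties -f custom-ldap.properties \
     -u jdoe -u asmith
```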
You can configure an alias for the short and long, singular, and plural forms of the term Project, as displayed in the Clarity LIMS interface.
Renaming is achieved by configuring the following properties, using the omxprops-ConfigTool.jar tool at:
clarity.alias.project.short.singular - The short term for "Project"
clarity.alias.project.short.plural - The short term for "Projects"
clarity.alias.project.full.singular - The full term for "Project"
clarity.alias.project.full.plural - The full term for "Projects"
clarity.alias.project.short.singular—Maximum of eight characters.
clarity.alias.project.short.plural—Maximum of nine characters.
If multiple words are used, capitalize each word (eg, Test Requests).
Short form singular and plural aliases are truncated to eight characters and nine characters respectively. An ellipsis is used to indicate truncation.
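A hedged sketch of setting one of these properties (the jar path and `set` syntax are assumptions; confirm on your server):

```bash
# Rename "Project" to "Test Request" in the full singular form.
java -jar /opt/gls/clarity/tools/propertytool/omxprops-ConfigTool.jar \
     set clarity.alias.project.full.singular "Test Request"
```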
Clarity LIMS creates various log files to help with the resolution of issues. During support request investigation, the Support team may ask for the following types of log files:
Automation Worker creates history and log files, and stores them on laboratory computers in the logs folder of the Automation Worker installation directory.
If Automation Worker is installed on a Windows machine using the program default, find the logs folder at the following location:
If Automation Worker is installed on a Linux server, find the logs folder at the following location:
The following log files are available:
wrapper.log - This log file outputs information on the starting, running, and stopping of the Automated Informatics service.
automatedinformatics.log - This log file outputs messages from installed plug-ins, such as automation commands and ADC directory scans.
Log on to the server using the glsai user ID and run the following command:
If the Automation Worker is installed on a server other than the Clarity LIMS server, use the appropriate user credentials.
In the web browser, if the LIMS interface does not display items/elements correctly, provide the information and error messages to the Illumina Support team.
Instructions for finding error messages within the browser console are described in the following sections.
To Start the Chrome Console:
Right-click on an element in the browser and select 'inspect element.'
A sub window opens below the main window in Chrome, showing the source HTML.
Select the Console tab, and reload the troublesome page - any JavaScript errors will be reported there. Include these errors in the Support Request ticket.
NOTE: Between stages in a protocol step you may see errors of the following type:
Such messages are expected. This is the EPP trigger checking that there is no EPP transition to fire on the page change. (This can be annoying for debug purposes, but feel free to include these in the Support Request ticket.)
To Get the JavaScript version:
Open up the Console as described in the previous section.
Go to the Network tab.
Select 'scripts' from the options listed at the bottom of the tab.
A script named isis-all.js?v=XXXXX displays.
Determine the version build number. (In the script name above, XXXXX represents the version build number.)
To Start the FireFox Console:
Right-click on an element in the browser and select 'inspect element.'
A sub window opens below the main window in Firefox, showing the source HTML.
Select the Console tab, and reload the troublesome page - any JavaScript errors will be reported there. Include these errors in the Support Request ticket.
NOTE: Between stages in a protocol step you may see errors of the following type:
Such messages are expected. This is the EPP trigger checking that there is no EPP transition to fire on the page change. (This can be annoying for debug purposes, but feel free to include these in the Support Request ticket.)
To Get the JavaScript version:
Open up the Console as described in the previous section.
In the Filter options Search box, type 'isis'.
A script named isis-all.js?v=XXXXX appears.
Hover over this script with your mouse to find the V (version) build number.
If you are experiencing problems and need to submit a support request, use the following guidelines to determine which log files to send to the Illumina Support team:
basespace-lims-*.log: Include if experiencing slowness in the application. (Default path: /opt/gls/clarity/tomcat/current/logs/)
automatedinformatics.log: Include if you are experiencing problems with an integration or if a process using an EPP string does not work as expected. (Default path: /opt/gls/clarity/automation_worker/)
wrapper.log: Include if the Automation Worker is unable to start (rarely needed).
search-indexer.log: Include if there is an issue with the search feature. (Default path: /opt/gls/clarity/search-indexer/logs/)
claritylims.log: Include if there is an issue with the search feature. (Default path: /var/log/elasticsearch/)
Browser Console and LIMS JavaScript version: Include for any web interface display issues. A simple refresh of the browser page may resolve the issue. However, the Support team would prefer receiving the console log and JavaScript version to investigate and make product improvements.
By default, Clarity LIMS allows duplicate sample names within the same project. If you would like to enforce sample name uniqueness within a project, you can do so.
Two scripts have been developed to support this requirement:
SampleNamePerProjectUniqueConstraintStep: Apply this uniqueness constraint to enforce unique sample names within a project.
CleanupDuplicatedSampleNamesPerProjectStep: Prior to running the uniqueness constraint, use this optional cleanup script to clean up a database that already contains duplicate sample names.
Both scripts are available via the clarity-migrator.jar tool.
If your database contains projects in which duplicate sample names exist, run the CleanupDuplicatedSampleNamesPerProjectStep script to clean up sample names that would violate the sample uniqueness constraint property. The cleanup script searches the database for sample names that are not unique and renames them.
The cleanup script also renames the corresponding original submitted sample name - since there is a one-to-one correspondence between submitted sample and derived sample names in the LIMS interface.
As the glsjboss user, change to the clarity-migrator directory:
Run the clarity-migrator.jar tool, providing the name of the cleanup step as a parameter:
The step will run and no validation errors should be reported.
Once cleanup has been performed successfully, you can apply the sample uniqueness constraint.
After you have cleaned up the database (if this step was required), you can apply the uniqueness constraint.
Applying the sample uniqueness constraint results in a change at the LIMS database / schema level. Once you have applied this change, there is no script available to revert it.
If you need to remove the uniqueness constraint, you will need to submit a request to the Illumina Support team.
To enforce sample uniqueness:
As the glsjboss user, change to the clarity-migrator directory:
Run the clarity-migrator.jar tool, providing the name of the uniqueness constraint step as a parameter:
The step will run and no validation errors should be reported.
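A hedged sketch of both runs (the directory and exact invocation are assumptions based on the steps above; step names are as documented):

```bash
cd /opt/gls/clarity/tools/migrator   # hypothetical clarity-migrator directory

# Optional cleanup of existing duplicates, then the constraint itself.
java -jar clarity-migrator.jar CleanupDuplicatedSampleNamesPerProjectStep
java -jar clarity-migrator.jar SampleNamePerProjectUniqueConstraintStep
```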
After enforcing sample uniqueness, if a user attempts to accession or update sample names that already exist in the project - via the user interface or the API - an error message displays. The message describes the problem and advises the user to rename the duplicate samples.
The following sections describe and illustrate what happens if an accessioned or updated sample name conflicts with an existing sample name within the same project.
If an accessioned or updated sample name conflicts with an existing sample name within the same project:
Upload/modify via a sample sheet will result in the error shown below.
The sample with the duplicate name is identified within the parentheses of the Detailed error message.
The quoted string in the Detailed error is the database name of the constraint being violated (uk_sample_name_per_project = unique key on sample table for name and project).
Sample management accession/modify will result in the error shown below.
If a user attempts to accession/modify a sample name under similar circumstances via an API operation, the results received would be similar to the content of this error message.
As of Clarity LIMS v6.3, it is possible to integrate with Illumina Connected Software Platform (ICP). This is available as part of Clarity LIMS Enterprise Software for hosted instances.
The Clarity LIMS ICP solution allows for the following features:
Single Sign-On (SSO) authentication to Clarity LIMS and LabLink when you are already logged into Illumina Connected Software Platform.
Unidirectional synchronization of user information (such as first name, last name and title) from Illumina Connected Software Platform to Clarity LIMS.
Automated unidirectional provisioning of user accounts from Illumina Connected Software Platform to Clarity LIMS. The email address of the ICP user is used by Clarity LIMS to determine if a new user needs to be provisioned upon successful login.
If you wish to access the default Clarity LIMS login page while ICP is enabled, use the URL https://{SERVER_DOMAIN}/clarity/login/auth/?default=1.
Must have a Clarity LIMS purchase with a domain of https://<customer>.claritylims.com (for example, https://reader.claritylims.com).
Must be configured with a tenant administrator account for your Illumina Connected Software Platform enterprise domain.
NOTE: Users with a Platform Services public account are not allowed to log in to Clarity LIMS.
Access to your Clarity LIMS instance to configure the application properties (see the configuration steps below).
If you use, or would like to use, ICP integration with Clarity LIMS, make sure that the global secret is configured using the Secret Management Util.
For details on ICP integration onboarding, contact the Illumina Support Team.
Clarity LIMS leverages ICP's user, password, and session management capabilities. To allow users to access Clarity LIMS, they must also be granted the "Has Access" role for the Clarity LIMS product through the IAM console.
NOTE: Clarity LIMS Open API authentication for ICP users is not supported.
NOTE: ICP Integration is only supported on Clarity LIMS hosted environment.
By default, only the Administrator role has the SystemSettings:action permission.
To enable ICP integration with Clarity LIMS, a Clarity LIMS system administrator completes the following steps:
In Clarity LIMS, select System Settings on the top right menu bar.
On the system settings screen, select the Application Properties tab, then search for Platform.
Click Select All and update the following properties with the appropriate values.

| Property | Description |
|---|---|
| authentication.type | Set to platformAuthentication. |
| platform.host | The fully qualified URL of the target ICP domain. The URL must start with https://. Example: https://reader.login.illumina.com |
| platform.domain | The target domain of ICP. Example: reader |
| platform.defaultLab | The default lab used when creating newly provisioned ICP users. Set to Administrative lab by default. |
| platform.defaultRoles | The list of roles applied to provisioned ICP users. Set to Labtech, Webclient by default. NOTE: Roles and permissions are managed within Clarity LIMS. |
Users are redirected to the 401 Unauthorized error page if none of the default roles configured in the platform.defaultRoles property contain the clarityLogin permission.
Once an ICP integration with Clarity LIMS is established, all changes to user profiles must be made from the Illumina Connected Software Platform service.
The email address of the ICP user is available to Clarity LIMS after successful login with ICP.
Users are automatically created with a Clarity LIMS user account based on the platform.defaultLab and platform.defaultRoles configured when a user accesses Clarity LIMS via ICP for the first time.
If you have an existing Clarity LIMS user account, it will automatically be linked to your ICP user account based on the Clarity LIMS account's email address.
To synchronize user information from ICP to Clarity LIMS, a Clarity LIMS system administrator (who is also an ICP tenant administrator) completes the following steps:
From Configuration, select the User Management tab.
Select the Users tab.
In the Users list, select Sync Platform.
Sync Platform is hidden by default if ICP has not been implemented.
There are two ways to unlink ICP provisioned users: from your own profile (this can be done by any ICP provisioned user), or from User Management (a Clarity LIMS system administrator role is required).
In Clarity LIMS, at the right of the menu bar, select your username and then select Profile.
The Profile page opens, displaying the details associated with your user profile.
On this page, click Unlink Platform Account.
Click Continue in the pop-up message to unlink from the ICP account.
NOTE: You will be logged out of Clarity LIMS and redirected to the Clarity LIMS login page.
A Clarity LIMS system administrator is required to complete the following steps:
On the main menu, select Configuration.
On the configuration screen, select the User Management tab, then select Users.
The Users tab shows a list of all current active and archived users in the system, categorized by role.
Select the user to unlink from ICP.
The details for the selected user display in the User Details area on the right. ICP users will have Unlink Platform Account enabled.
Click Unlink Platform Account.
Click Continue in the pop-up message to unlink from the platform account.
After an ICP integration with Clarity LIMS is established, an administrator must use Domain > Session Management in ICP to specify the period of time for which a user's session should persist after they have authenticated. Any session timeout configured using clarity.session.timeout and api.session.timeout in the Clarity LIMS property table no longer applies.
Saving passwords in encrypted format is recommended. Use the omxprops tool to do this.
Use the following command:
This command returns a text string resembling the following example:
Set the password by enclosing the text string in the ENC() wrapper.
Consider the following examples:
When setting an encrypted password in a configuration file:
When using the property tool to set an encrypted password:
Using ENC() is not needed when setting the password in Automation Worker:
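A hypothetical illustration of the wrapper in a configuration file (the encrypted string below is a placeholder, not real omxprops output):

```
jdbc.password=ENC(4KspX0sqFgJ8hT2VQkNlzw==)
```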
This constraint is set at the database level. Container Uniqueness can be turned ON by using the migrator tool, and running the optional migration step called "ContainerNameUniqueConstraints". This can be done by calling this command:
Notes:
Edit the migrator.properties file to ensure that the mode is set to "migrate", not just "validate".
If the migrator has carried out its work successfully, the database should now have a new index present called 'unique_cnt_name'. (In psql, run \di to list indexes.)
You do not need to restart the Tomcat service for this change to take effect.
Container Uniqueness can be turned OFF by running this SQL command:
The migration step fails if there are already any non-unique container names in the database. If it fails, you can find all non-unique names with the following statement:
For each non-unique name found, you can iterate through all instances of it, and rename them so that they will be unique.
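As a sketch, assuming the container table is named container with a name column (verify against your schema), the duplicate check might look like the following:

```bash
# List container names that occur more than once.
psql -U clarity -d clarity -c \
  "SELECT name, COUNT(*) AS copies
     FROM container
    GROUP BY name
   HAVING COUNT(*) > 1;"
```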
When you run the above query and determine there are too many containers to hand-edit, what now?
1) Are any of the containers that have non-unique names in a depleted or discarded state? If so, you can probably delete them once the customer gives the all clear:
Now, what do the values for the container stateid mean? You won't find them in the database; they are hard-coded as follows:
Armed with this knowledge, we can refine the query to highlight containers that may be deleted with little consequence:
Notes:
Containers can only be deleted if they are empty
SQL 'IN' statements normally have a limit of 255 records, so if the sub-select (the one in parentheses) returns more than 255 records, your mileage may vary.
Clarity LIMS provides the ability to configure a step such that it requires sign-off by electronic signature (eSignature) before it can be completed.
Steps that have eSignature enabled display an eSignature enforcement button in the Record Details screen, and require valid eSignature credentials (username and password) to be entered.
Next Steps cannot be viewed until valid credentials for a user with eSignature signing permission have been entered.
Until the step has been completed, any changes made to the step will again require an eSignature sign-off.
All eSignature events, successful or not, are recorded with the step and in the audit trail.
Permission to sign an eSignature is a role-based permission. For details, see Configured Role-Based Permissions.
By default, users cannot sign off on their own work.
eSignature enforcement is achieved by configuring the eSignature.Enabled and eSignature.RequiresDifferentReviewer properties, using the omxprops-ConfigTool.jar tool at:
Description: Enables/disables eSignature for step execution
Default value: false
Result: By default, eSignature enforcement on step execution is not enabled.
Description: Determines whether or not the eSignature must be provided by someone other than the user executing the step.
Default value: true
Result: By default, if eSignature.Enabled is set to "true", the eSignature must be provided by someone other than the user executing the step.
To change the default settings for eSignature enforcement, contact the Illumina Support team for assistance.
When eSignature is globally enabled, you can enable/disable eSignature requirement for any step.
This setting is available on the Record Details milestone in master step/step configuration (accessible from the Lab Work configuration screen). For details, see #configure-record-details-milestone.
When eSignature is globally enabled, it is enabled for each step by default.
When eSignature is enabled for a step:
The Record Details screen shows the eSignature button and the Next Steps button is disabled.
Selecting eSignature opens a dialog that requires valid eSignature credentials to be entered. Before proceeding to the next step, another user with the permission to sign eSignatures must sign in to the system and sign the eSignature.
After valid eSignature credentials have been entered, the Next Steps button is enabled. Hovering over the eSignature button displays a tooltip showing who signed the step and when.
If running Config Slicer v3.0.x against a configuration package or manifest file that was generated with a pre-3.0 version of Config Slicer, an error message displays.
In this scenario the error message will resemble the following:
SOLUTION:
Upgrade the configuration package file by running upgrade-config-slice.jar against it.
This jar file should be located in the same directory as Config Slicer (/tools/config-slicer).
The upgrade script will save an upgraded copy of the configuration package file (to the same directory), which can be inspected and imported.
Example:
To upgrade a manifest file at the same time, simply add it to the script as a second argument:
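Hedged examples (file names are placeholders):

```bash
# Upgrade a pre-3.0 configuration package.
java -jar upgrade-config-slice.jar oldPackage.xml

# Upgrade the package and its manifest together.
java -jar upgrade-config-slice.jar oldPackage.xml oldManifest.txt
```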
In this scenario the error message resembles the following:
SOLUTION:
This upgrade process is a little more involved:
Extract the list of process types from the manifest file.
Format them as parameters to Config Slicer's custom manifest generation for process types.
For example, assuming we extracted the process types Process Type 1 and Process Type 2 in step 1, the following command would be used:
Run the custom manifest generation.
Take the list of analyte UDFs and ResultFile UDTs from the manifest file that was generated and add them to the old manifest file.
Add the following line of text to the top of the old manifest file:
This section describes the steps for removing from the system any projects, samples, workflows, protocols, steps, and other artifacts that were created during training but are not needed for production.
After initial user training, but before the lab starts to use Clarity LIMS in production, a database cleanup is recommended.
This process removes any projects, samples, workflows, protocols, steps, and other artifacts from the system that were created during training but are not needed for production.
Contact the Clarity LIMS Support team to schedule a time for this cleanup procedure.
In preparation for the clean-up, complete the following steps:
Delete all unwanted custom fields and master steps.
Set the status of all unwanted workflows to Archived. If there is an Archived workflow that you would like to keep, temporarily set its status to Active. Note the following:
Protocols that are part of an Archived workflow, or a set of Archived workflows, will be deleted.
Protocols that belong to an Active or Pending workflow, in addition to an Archived workflow, will not be deleted.
After these steps are completed, Illumina Technical Support helps to perform the cleanup procedure. This procedure takes approximately 15–20 minutes to complete.
Use a configuration package file to copy a configuration set from one server to another or to back up a particular working configuration at a particular time.
Required steps:
Create a configuration manifest file.
Export to configuration package file.
This process involves copying a configuration set from one server to another, by importing a configuration package file.
For example, to move a configuration set to a different environment for testing or troubleshooting purposes, or copy a new configuration set (created and tested on one system) onto another system.
Required steps:
On the source server, create a configuration manifest file, and then export to configuration package file.
On the destination server, import the configuration package file.
There are two approaches:
Comparing configuration manifest files provides a way to determine if there are processes or UDFs missing from a system. The information in the manifest files only allows comparing process and UDF names, not the specific way in which a process or UDF is configured.
Comparing configuration package files helps check how specific processes are configured. If the systems being compared are meant to be identical, this method is more appropriate to use.
Required steps:
On each system, create a configuration manifest file, or a configuration package file.
Run a diff comparison on the two files.
Edit the broken manifest file, export it, and import the resulting configuration package file into the system to add the missing entities.
There are several tools available to compare files:
Meld (graphical), for Linux; also ported to MacOS.
Standard Unix diff (Linux, MacOS) (use -q for a quick check).
FileMerge (OSX with XCode installed) - /Developer/Applications/Utilities/FileMerge.app
Combine configuration sets from multiple systems, merge them into a single configuration package file, and then import the file into a new system.
Required steps:
On each source server, create and edit a configuration manifest file.
Merge the entities from all files into a single manifest file.
Export the resulting file to a configuration package file.
On the destination server, import the merged configuration package file.
Copy a configuration set to restore it on another server and use it for testing/troubleshooting purposes.
Required steps:
On the server containing the 'broken' source system, create a full manifest file, containing all of the LIMS system configuration.
Export the manifest to a configuration package file. Save file to media/disk.
On the target server, import the configuration package file created on the source system.
To upgrade or add to a configuration set already installed on a server, two configuration package files are needed: one to back up the working configuration set and one containing the new updated configuration that has been created on a test server.
Required steps:
On the server you want to upgrade, create a full manifest file and export this to a configuration package file. Save this file as a backup.
On the test server, create a manifest file and edit it so that it only includes the entities you want to import.
On the server you want to upgrade, import the configuration package file.
You may want to take a configuration that has been created and tested on one system/site (referred to as the source system in the following steps) and deploy it on another system/site (destination system).
Required steps:
On the source system, create a configuration package file containing the tested configuration to import.
On the destination system, create a full manifest file and export this to a configuration package file. Save this file as a backup.
On the destination system, import the configuration package file that was created in step 1.
Refer to #import-export-mode and #guidelines-and-semantics-for-creating-manifest-files.
* The importAndOverwrite option lets Config Slicer update existing configuration, rather than create new configuration.
Create and validate a configuration set on the source server.
Have access to the Config Slicer tool and the libs subdirectory.
Create a simple manifest file or a custom manifest file.
Simple manifest file:
On the source server, copy and paste the following code to the command line. Edit the variables (version, server IP address, username, password, and manifest file name) to match those in your system.
A command with the variables filled in might look like this:
This step produces a manifest file containing information about the entire system configuration. For best practice, copy this file and rename the copy in a way that reflects the configuration (we'll use newconfiguration.txt for this example). Use the copied file for the next steps.
A manifest file is used as an intermediary step to produce an XML configuration package file.
The manifest file is only relevant to the data that exists in the system at the time it is created. Discard it after creating the configuration package file or save the manifest file for historical auditing purposes. It can provide a record of a known working configuration set on a particular system.
Custom manifest file:
To create a manifest file for only specific workflows, protocols, or process types, follow the steps outlined previously, using -o custom instead of -o example.
When using this operation, provide additional parameters (-w, -pr, and/or -pt) to specify the exact entities for which to create a manifest.
For example:
Specifying every option is not required. It is also possible to specify more than one of each kind. For example, create a manifest file for a workflow with the following command:
Or for two protocols like so:
Or for two process types and a protocol with this command:
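A hedged sketch of these variants (the jar name and connection options -s/-u/-p/-m are assumptions; entity names are placeholders):

```bash
# One workflow:
java -jar config-slicer-tool.jar -o custom -s 192.168.0.10 -u admin -p password \
     -m newManifest.txt -w "My Workflow"

# Two protocols:
java -jar config-slicer-tool.jar -o custom -s 192.168.0.10 -u admin -p password \
     -m newManifest.txt -pr "Protocol A" -pr "Protocol B"

# Two process types and a protocol:
java -jar config-slicer-tool.jar -o custom -s 192.168.0.10 -u admin -p password \
     -m newManifest.txt -pt "Process Type 1" -pt "Process Type 2" -pr "Protocol A"
```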
The next step is to edit the manifest file, removing unnecessary information and preserving only the custom configuration to import into the destination system.
Example 1: In this example, everything is deleted from the manifest file, except for the two new process types to export.
Example 2: In this example, the manifest file contains definitions for some new reagent types:
Copy and paste the following code onto the command line. Edit the variables (version, server IP address, username, password, and manifest and package file names) as required.
An edited command might look like the following example:
This step generates a data file in an XML format (newconfiguration.xml in our example) that is compliant with the Rapid Scripting API.
A configuration package file has been exported. This example uses a file named newconfiguration.xml.
Access to the Config Slicer tool on the destination server has been granted.
There are no in-progress steps for any of the protocols that are going to be sliced in; otherwise, the import of the protocol fails.
On the destination server, copy and paste the following code to the command line. Edit the variables (version, package file name, server IP address, username, and password) as required.
A command with the variables filled in might look like this:
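A hedged sketch (the jar name and option letters are assumptions, as above; the -k package-file option in particular should be verified against your Config Slicer version):

```bash
java -jar config-slicer-tool.jar -o import -s 192.168.0.20 -u admin -p password \
     -k newconfiguration.xml
```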
If any of the configuration entities that are about to be imported already exist in the destination system, Config Slicer either logs a warning or attempts to update them, depending on the mode being run (see #import-export-mode).
If Clarity LIMS has maintained an internal record of deleted items, the previous information may also apply to deleted entities. This situation may occur if those entities have created outputs that currently still exist in the system.
When running in import mode, entities that exist and are different from the version in the package have a warning and full diff logged.
When running in importAndOverwrite mode, Config Slicer attempts to update entities that exist and are different from the version in the package.
In this scenario, a backup configuration package containing copies of the updated entities (as they were before the change) is saved to the directory where the configuration package is located. If that directory is not writable, the backup package is saved to the same directory as the log file.
If the version in the package is identical to the version on the server, no errors are logged and Config Slicer considers that entity successfully imported.
To avoid changing existing configuration (which could possibly break historical data), another option is manually renaming the old entities. Add an extension or a prefix and continue with importing the new configuration package.
Use the following methods to validate whether an import has completed successfully:
Check the Import Log:
For each specific type of configuration that is being imported (e.g. container types, process types, workflows), Config Slicer will log a set of messages. The messages look similar to the following examples:
Before it begins to process a specific entity type, the tool logs how many entities were found. Any errors or warnings about this set of entities always appear between Found 4 $Entities and Summary of $Entities.
Every entity that is found in the configuration package always appears in the summary, in one category or another. If a scenario occurs where this isn't true, or where the initial count of entities does not match the number in the summary, something has gone wrong and a bug report should be filed.
Validate with Config Slicer:
Running Config Slicer with the validate operation checks every entity in the package to see if it exists on the destination server. It reports results in a format similar to the log format shown previously.
Run the validate operation before or after importing:
Before importing—checks for problems that could occur when importing a configuration from a package. This is the primary use of the validate operation, as it makes sure that during import, "configuration exists in package but not on server" is not considered an error case.
After importing—makes sure that the results are what was expected.
Example of validate output:
Check Configuration on the Destination Server:
The ultimate test of whether configuration has imported successfully is to check the configuration on the destination server itself. Make sure it looks and behaves as expected.
Configuration can be checked either via the Configuration screen in the Clarity LIMS user interface, or via the configuration endpoints in the API.
To be included in a configuration package file, the top-level entities of the custom configuration set must be explicitly enumerated.
Some of the top-level entities are discrete 'self-contained' units, and do not include other units (for example, container types, reagent types, artifact groups, and non-artifact/non-process type UDFs).
Some top-level entities (process types, for example) automatically include other units (refer to #non-top-level-entities for more information).
For process types, only configured processes, vanilla Transfer processes, Pool Samples (since 7.5), and Add Multiple Reagents (since 7.6) processes are supported. All process type details are exportable/importable.
Some entities are only included as part of other entities.
For example, process templates, process type UDFs, and artifact UDFs are only included when included in a top-level process type. (The latter is a special case, given that the same artifact UDF can be used by multiple process types.)
Some entities may be required by other entities. In these cases, make sure that these entities are exported/imported in the correct order.
For example, because process types may require the existence of a container type, create the container type first.
Required entities are not automatically included. If they do not exist in the destination system, explicitly include them in the manifest file.
For example, suppose that a process type declares a particular container type as a default output plate. If that container type does not exist in the destination system, include that container type in the manifest file.
Import modes affect the transactional behavior of the tool, allowing it to make incremental changes if errors occur or to provide an all-or-none option. For example, use the validate operation to determine if errors would be encountered.
This is the default import mode.
This mode attempts to import as many units as possible. Any failures are logged, but the import operation is not interrupted.
For example, a failing container type import will not prevent other container types from being processed.
Similarly, if a process type fails import because it already exists, any UDFs and process templates for that process type will still be processed.
There is no need to enter an option for this mode.
If this mode is used, the import operation is aborted if it encounters a failure.
For example, if there is an API version mismatch, the operation will abort and no further imports will be executed. Note that any changes already performed are not reverted.
Use the -Strict option to enable this import mode.
Validate mode: Use the validate operation (instead of import) to enable this mode.
This mode produces a report listing the following items:
Entities that would be successfully imported because they do not exist on the target server.
Entities that already exist on the target server and are identical.
Entities that already exist on the target server but are different.
Validate mode can only detect a limited set of errors. For example, it can check whether a particular piece of configuration already exists and, if so, whether it is identical to the one included in the configuration package.
This information can help determine if the importAndOverwrite option is needed instead. For details, see #command-line-options-and-usage.
Example of console output:
Use this mode to generate a manifest file if the configuration you want to export is not tied to a specific set of workflows, protocols, or process types.
Use the example operation (instead of import) to enable this mode.
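A minimal sketch of generating an example manifest (server name and credentials are placeholders; -m names the manifest file to write, per #command-line-options-and-usage):
java -jar config-slicer-<version>.jar -o example -s yourserver.example.com -u <username> -p <password> -m example-manifest.txt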
The Config Slicer tool is used to move small incremental configuration changes, contained in a configuration set, between Clarity LIMS systems. For example, it moves changes between a test system and a production system.
This configuration tool provides granular export/import functionality for managing the configurations that support experimental workflows.
Use this tool to back up, copy, deploy, and restore configuration sets. Because changes are moved in small increments, modifications to the production system are kept to a minimum.
Review the following key concepts:
Configuration set—This item may be created by the Illumina Support team or by the customer. It comprises the items (known as entities) that are added to a Clarity LIMS system to customize it for a particular scientific experiment or workflow. The Illumina NGS Extensions Package is a good example. See #supported-entities.
Configuration manifest file—This text file determines the configuration set to be exported from a system. The manifest file does not contain the actual configuration data. It only drives the extraction of configuration from a system.
Configuration package file—This XML file contains the top-level entities selected by the configuration manifest file, plus any related child entities. For example, for process types, it includes process type UDFs, process templates (protocols), and output UDFs.
The Config Slicer tool uses an export/import process to transfer configuration sets from a source to a destination server. This process breaks down into the following tasks (example commands follow the list):
On the source Clarity LIMS server, use the Config Slicer tool to create a configuration manifest file.
Edit the manifest file so that only the required custom configuration set is preserved.
Use the Config Slicer tool to export the edited manifest file to a configuration package file.
On the destination Clarity LIMS server, use the Config Slicer tool to import the configuration package file into the system.
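As a sketch of these tasks on the command line (server names, file names, and credentials are placeholders; options are as listed in #command-line-options-and-usage):
# On the source server: export the configuration set described by the manifest to a package file
java -jar config-slicer-<version>.jar -o export -s source.example.com -u <username> -p <password> -m manifest.txt -k MyConfig.xml
# On the destination server: import the package file
java -jar config-slicer-<version>.jar -o import -s destination.example.com -u <username> -p <password> -k MyConfig.xml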
A configuration package file can be imported into multiple systems. Use this feature to create and import multiple custom configuration sets, such as the Illumina TruSeq integration. This functionality also provides the Illumina Support team with a scalable way to keep up with constantly changing protocols.
A configuration set comprises the entities added to a Clarity LIMS system that allow for customization of the system for a particular scientific experiment or workflow. The following entities are currently supported by the Config Slicer tool:
Sample UDFs and UDTs
Container UDFs and UDTs
Project UDFs and UDTs
Artifact groups (experiments)
Reagent types
Control types
Reagent kits
Process types (any configured processes – for example, Pool Samples and Add Multiple Reagents)
Process UDFs and UDTs
Output UDFs
Process templates - UDFs, UDTs, and parameter strings only (other entities such as instruments and researchers are not supported)
Protocols
Workflows
When generating a custom manifest by workflow, protocol, or process type, the following entities are not exported: Sample, Container, and Project UDTs, and Project UDFs. Account (Lab) and Client (Researcher) UDFs are never exported by Config Slicer. These are known issues.
Config Slicer does not export/import non-step automations, nor does it preserve the order of protocols.
When working with Config Slicer on the Clarity LIMS application server, there are no additional prerequisites. The latest version of the config-slicer.jar file is installed as part of Clarity LIMS on the Clarity LIMS server. In a default installation, find the file in the following location:
To work with Config Slicer on a machine other than the Clarity LIMS server, do the following:
Make sure that Java is installed.
Copy the /opt/gls/clarity/tools/config-slicer directory, and its contents, from the Clarity LIMS server to the machine (a copy example follows the list below). The config-slicer package should contain the following:
config-slicer-<version>.jar
libs subdirectory (which includes all the libraries referenced by config-slicer-<version>.jar, including groovy-all-2.4.8.jar)
upgrade-config-slice.jar
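As a sketch, the directory can be copied over SSH (the destination host and path are placeholders):
scp -r /opt/gls/clarity/tools/config-slicer <user>@workstation.example.com:/opt/config-slicer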
Use the following steps to help troubleshoot the installed Automation Worker framework. A flowchart is provided as a reference.
The first step is to check the connection between the Clarity LIMS server and the Automation Worker node.
Use the -n option of the ai-monitor.jar tool to see if the Clarity LIMS server is currently able to communicate with the AI node.
To check the status of ai-monitor.jar:
As the glsjboss user, open an SSH session to the Clarity LIMS server.
Run the following command:
If the Clarity LIMS server cannot connect to any of the AI nodes, the response will be as follows:
In this scenario, proceed to Step 2. Verify Windows Service or Linux Daemon.
If the Clarity LIMS server can connect to the Automation Worker nodes, the response will resemble the following:
Determine if the Windows service or Linux daemon for the Automation Worker is running.
To start, stop, or restart the Windows service:
From the Start menu, select Run.
In the Open text field, type services.msc and select OK.
In the Services dialog, locate the Automation Worker service.
Right-click the service and select Start, Stop, or Restart. If the service is stopped, start the service.
If the service is running, stop and start it again.
Wait for a minimum of three minutes, and then check if the AI node is communicating with the Clarity LIMS server by running the ai-manager.sh script with the status argument, as described in Step 1.
To start or stop the Automation Worker Linux daemon, use the following actions (a command sketch follows this list):
To verify current status:
To restart a running daemon:
To stop a running daemon:
To start a stopped daemon:
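As a hedged sketch, assuming an init.d daemon named automation_worker (see below for confirming the exact name), the standard invocations would be:
/etc/init.d/automation_worker status   # verify current status
/etc/init.d/automation_worker restart  # restart a running daemon
/etc/init.d/automation_worker stop     # stop a running daemon
/etc/init.d/automation_worker start    # start a stopped daemon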
After the daemon is started or restarted:
Wait for a minimum of three minutes, and then check if the AI node is communicating with the Clarity LIMS server by running the ai-manager.sh script with the status argument, as described in Step 1.
If the daemon is not recognized, list out the contents of the /etc/init.d directory and determine the exact name of the Automation Worker daemon.
The name typically contains 'automation_worker', but may vary—particularly if there is more than one daemon on the same Linux server, or if the Automation Worker is installed on a server other than the Clarity LIMS application server.
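For example, to list candidate daemon scripts (the grep pattern is illustrative):
ls /etc/init.d/ | grep -i automation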
Automation Worker creates history and log files and stores them on laboratory computers in the logs folder of the Automation Worker installation directory.
3.1 Reviewing Automation Worker log files
After performing the steps described above, reviewing these log files may help to determine the cause of the issue.
3.2 Turning on debug logging
After reviewing the log files, if the cause of the issue is not evident, the next stage is to turn on debug logging. This outputs DEBUG messages to the log files.
Contact the Clarity LIMS support team for instructions on turning on DEBUG mode.
After turning on debug logging, make sure that you restart the Windows service or Linux daemon.
Review the log files to determine if the DEBUG messages help to find a resolution.
All Clarity LIMS installations include the installation of a separate component known as an Automation Worker (AW) node (formerly known as Automated Informatics (AI) node).
When writing automation-based triggers, the code invoked by an automation runs on the AW node.
While the AW node is a critical component, it typically does not require much attention. However, there are several options to consider, including how many AW nodes a Clarity LIMS system uses and where they are placed. This section discusses some of these options.
The AW node is installed adjacent to the Clarity LIMS application. Its original purpose was to enable remote computing.
To illustrate these features, assume that Clarity LIMS is required to produce a file in a specific location. The file is then processed and an action occurs.
A good example is label printing via the BarTender application. The BarTender application picks up the new file, associates it with a printer and a template, and causes a label to be printed.
The infrastructure for this file storage and processing likely occurs on a separate server from the one that contains the Clarity LIMS application and the AW node.
The Clarity LIMS application invokes the command to create the file on the AW node. After the file is in the file store, it is processed by the file processing application.
How does the AW node get the file to the file store, even if it is on a different server and possibly on a different network? The solution is to install an AW node on the external server.
The addition of the second AW node, which is local to the external server but remote as far as Clarity LIMS is concerned, solves the problem. This demonstrates how AW nodes can support remote computing.
Clarity LIMS invokes the production of the file via the remote AW node.
The remote AW node copies the file to the local file store.
The file processing application processes the file.
An AW node is a Java-based application and can run on most PCs/servers.
The AW node and the Clarity LIMS application must be able to communicate through networking firewalls.
When the Clarity LIMS application has the choice of sending the task to multiple AW nodes, the channel name property is used to determine which one receives the job. For example, the AW node installed on the Clarity LIMS server has the default channel name of limsserver. This is why you must specify the limsserver value when defining an automation command.
When defining the automation, the following two items are defined (a hypothetical example follows the list):
Which command should be run by the AW node.
Which AW node should receive the job via the channel name property.
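For illustration only (the script path is hypothetical; limsserver is the default channel name mentioned above), an automation might be configured as follows:
Command line (hypothetical script): bash -c "/opt/gls/clarity/customextensions/myscript.sh"
Channel name: limsserver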
Typically, for the AW node that runs on the Clarity LIMS server, the convention is to place scripts in the /opt/gls/clarity/customextensions folder. The log file is stored in /opt/gls/clarity/automation_worker/node/logs/.
For the remote AW node, store the scripts in any folder, and choose where its log file gets stored (it is running on your hardware).
For the external AW node to run Clarity LIMS toolkits, such as the Lab Logic or Lab Instrument toolkit, make copies of the JAR files that contain these toolkits. Place them on the external server, so the AW node can access them.
There is no real limit to how many AW nodes you can have. Place them wherever they are needed.
Consider the AW node that is installed on the Clarity LIMS server for a cloud-based implementation.
Although cloud-based Clarity LIMS servers contain an AW node, the best practice is not to run your code on it.
Why not? It is a question of security policies for both Illumina and our cloud-hosting provider. If we provide customers with command-line access to the AW node, we are allowing them command-line access to the Clarity LIMS server and, as a consequence, the Clarity LIMS application itself. If using Clarity LIMS in a clinical environment, this makes it more difficult to pass security and access audits.
If the Clarity LIMS instance is cloud-hosted, and you need to run custom code via an AW node, there is a solution.
You could install an AW node on your local architecture and have it interact seamlessly with Clarity LIMS, as illustrated in Figure 1. You can control access to the remote AW node, the infrastructure of the Clarity LIMS server is safely hidden behind firewalls, and security policies remain intact.
However, for some customers, part of the attraction of a cloud-based system is not having to maintain mission-critical hardware. To address this, we can offer an external AW node that does not live on local hardware but is also in the cloud.
Thus, we can provide an external AW node that lives on a separate machine, known as the Automation Worker host. This Software-as-a-Service (SaaS) model gives access to only those parts of the system that require it. You can access the Automation Worker host, and its AW node can interact with Clarity LIMS. However, you cannot access the Clarity LIMS server.
Because the AW node is running on a separate machine from the Clarity LIMS server, it needs its own copy of the toolkit JAR files. As an additional bonus, the Automation Worker host hardware supports Python 3 for scripting.
For customers who are cloud-based and need a true local AW node, this is not a problem. They can have an AW node in the Automation Worker host and as many local AW nodes as needed.
In Clarity LIMS v5.4 and later, you can install the Automation Worker Node onto the Windows server. Before you begin the installation, make sure that you have met the following requirements:
The clarity-aiinstaller-x-deployment-bundle.zip file must be retrieved from the server where Clarity LIMS is installed. You can find this file at /opt/gls/clarity/config/.templates/automation_worker/.
For Windows 10 users, the command prompt must be specified in the Automation command line (eg, cmd.exe /c echo 'Hello World').
If VISTA-SETUP.bat does not display in the installation window after Run as Administrator is selected, start the installation window as follows.
Launch the command prompt as an Administrator.
Change to the directory where VISTA-SETUP.bat is located with cd C:<DIRECTORY>.
Execute the java -jar .\GLSAutomatedInformatics-Installer.jar command.
Install the Automation Worker node as follows.
Copy the SecretUtil deployment bundle ZIP file to the remote Automation Worker node.
Extract the contents of the ZIP file to a folder named secretutil. You can add this folder to C:\opt\gls\clarity\tools or another location you choose.
Edit the vault.properties file in the conf folder to set application.mode to file.
Make sure that the following System Environment Variables are set (a sketch follows the list):
CLARITYSECRET_HOME (eg, C:\opt\gls\clarity\tools\secretutil)
CLARITYSECRET_ENCRYPTION_KEY (minimum 24 characters)
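A minimal sketch, setting both variables system-wide from an elevated command prompt (values are examples only; the key must be at least 24 characters):
setx CLARITYSECRET_HOME "C:\opt\gls\clarity\tools\secretutil" /M
setx CLARITYSECRET_ENCRYPTION_KEY "<your-24-plus-character-key>" /M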
Using secretutil.jar, set the required secrets. For a basic installation of AutomationWorker, you must set the passwords for apiuser and glsftp using the following commands:
# For glsftp
java -jar C:\opt\gls\clarity\tools\secretutil\secretutil.jar -u=<secret> app.ftp.password
# For apiuser
java -jar C:\opt\gls\clarity\tools\secretutil\secretutil.jar -u=<secret> -n=integration apiusers\<username of the API user, e.g. apiuser>
After setting the secrets, attempt to retrieve them with the following command:
java -jar C:\opt\gls\clarity\tools\secretutil\secretutil.jar app.ftp.password
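Similarly, to confirm a secret stored under the integration namespace (assuming the API username is apiuser, as in the set command above):
java -jar C:\opt\gls\clarity\tools\secretutil\secretutil.jar -n=integration apiusers\apiuser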
Restart the Automation Worker service.
The System Settings screen allows you to view and manage certain operations, such as Clarity LIMS system configuration, log file export, and IP whitelisting, that were previously possible only through SSH commands or CLI scripts.
To access the System Settings screen, the SystemSettings:action permission is required. Without this permission, the System Settings option is not visible.
By default, only the administrator role has the SystemSettings:action permission. For more information on user roles and permissions, see the related documentation.
In Clarity LIMS, select System Settings on the top right menu bar to access the screen. Use this screen to create and manage the following:
Application Properties allows for management of Clarity LIMS system configuration that is stored in the database.
You can view, create, modify, and delete application properties.
Banner allows for management of a custom announcement in Clarity LIMS. When configured, the banner shows at the top of the Clarity LIMS page for all users.
You can use Export Logs to retrieve the following log files generated by the various Clarity LIMS services for debugging purposes.
Automation Worker
Tomcat UI
Tomcat API
Elasticsearch
HTTPD Access Log
Search Indexer
To Export Logs:
On the System Settings screen, select the Export Logs tab.
Select the log to be exported.
Select Export.
Global Tokens setting allows you to create and manage a list of user-defined tokens to be used in automations.
IP Whitelisting allows you to request access to specific ports from whitelisted IP addresses.
IP whitelisting is available only on Illumina Hosted environments.
You can request access to the following three access types:
Clarity Access Type: Port 80, 443
SSH Access Type: Port 22
Database Access Type: Port 5432
NOTE: For Database Access Type requests, the PostgreSQL config file (pg_hba.conf) must also be updated to allow access to the database, in addition to submitting an IP whitelisting request. Contact the Illumina Support Team to request the read-only access credentials.
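For reference, once a Database Access Type request is active and read-only credentials have been obtained, a connection might look like the following sketch (the host name and user are placeholders; clarityDB is the database name referenced later in this document):
psql -h yourserver.example.com -p 5432 -U <readonly-user> -d clarityDB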
An IP whitelisting request takes less than 30 minutes to complete on the server. If there are N pending IP requests on the server, a new request should complete in less than (N + 1) * 15 * 2 minutes.
Example
One new IP whitelist request on a system with 5 pending IP whitelist requests takes less than (5 + 1) * 15 * 2 = 180 minutes to complete.
The status of an IP whitelisting request may be Requesting, Active, Request Failed, Delisting, Delist Failed, or Archived. You can view and manage the list of IP request records in the IP Requests area.
The following table provides an overview of each IP request status:
NOTE: IP addresses with a failed status will be removed from the IP Requests table after 30 days.
Roles and Permissions management allows you to create, modify, and delete roles and the permissions associated with a role.
SSH Request allows you to request SSH access via a user public key.
SSH Request is available only on Illumina Hosted environments.
The status of an SSH request may be Requesting, Active, Request Failed, or Archived. You can view and manage the list of SSH request records in the SSH Listings area.
The following table provides an overview of each SSH request status:
NOTE: SSH access entries with Request Failed or Archived status will be removed from the SSH Listings table after 30 days.
If there is an API version mismatch, Config Slicer will log a message at the beginning of an import:
Generally, this message is not a cause for concern.
However, if there are warnings about configuration differences during the import, changes that have been made to the API in between the package version and the server version may be responsible.
In addition, if the package being imported is from v4.x, a log message may appear at the beginning of an import:
Detected package is from a pre-5.0 Clarity system. The package will be upgraded to match new configuration requirements.
In this case, the package is upgraded and the resulting configuration is output to a folder in the same location as the package being imported. This upgraded configuration package is the v5.x representation of the v4.x configuration and can be used directly to troubleshoot errors occurring on import. The updated package can also be imported directly.
The following changes have been included in the latest release of Config Slicer (v 3.0.51):
The entity summary is now shown after the individual differences have been written to the log file, as opposed to before these differences.
Support was dropped for the following process type attributes, which are no longer used:
SupportsExternalProgram
ShowInExplorer
ShowInButtonBar
OpenPostProcess
IconConstant
Support was dropped for the show-in-tables property on Custom Fields (UDFs) because it is no longer used.
There is now support for the new Clarity LIMS 5.0 distinction between Protocol Step names and Master Step (ProcessType) names.
It is now possible to slice in and out all the new Master Step settings added in Clarity 5.0, including:
Default Process Template
Instrument Types
Container Types
Reagent Kits
Reagent Types
Control Types
Sample Fields
Queue Fields
Ice Bucket Fields
Step Fields
Step Properties
When importing/validating a slice from 4.x into 5.x, if the package contains any Master Steps (process types) or Protocols, it is upgraded to be compatible with the new configuration available in 5.x. This process is automatic, and the updated package is written out to the directory containing the package being imported, or to the directory that the log file is being written to. If errors occur while importing, this updated package can be edited directly and imported to fix them.
Note: If upgrading the package fails, import/validation will fail. In general, this will reflect a mistake in the configuration package being imported.
The following changes were specifically added to support slicing between Clarity LIMS 4.x and BaseSpace Clarity LIMS 5.0. All of these changes will be saved into a new configuration package that will be written to the same location as the package being sliced:
Backwards compatibility for Protocol Step setup configuration.
Support for updates of parent entities after both the parent and child entities have been imported.
This change is required for updating the defaultProcessTemplate and step-fields step properties on Master Steps. Both require that the Master Step (ProcessType), ProcessTemplate, and Master Step Custom Fields (ProcessType UDFs) exist before they can be set on the Master Step.
Support for setting the qcProtocolStep flag on Master Steps, allowing the correct Master Step type to be displayed in the Lab Work Configuration UI. The setting is propagated up from the Protocol Step to the Master Step.
After slicing, it is recommended to examine the configuration closely for any QC Protocol Steps that previously shared a Master Step with nonQC Protocol Steps. This setting may not transfer as expected and there is potential for misconfigured Protocol Steps. In particular, if the Master Step produces measurements, and even just one child step of a Master Step has qcProtocolStep=true, then the Master Step will get qcProtocolStep=true set on it. Thus, all other steps that use that Master Step have qcProtocolStep=true, whether or not it was set before.
Support for setting the default container on Steps and Master Steps in such a way that all behaviour is maintained from 4.2.
If every child step of a Master Step has the default container that was defined as a permitted container on the master (through the 'OutputContainerType' process-type-attribute), then the default will be added as a permitted container on the Master Step.
If any child does not have the default container from the Master Step, then the default is removed from the Master and each child that had the container is updated so that the default is the first permitted container (and hence the default).
Extra containers and any step properties that are no longer valid on a step will be migrated to new properties or removed.
Step-setup file configuration has been moved to the Master Step, and it is not possible to have a different set of step-setup files on a step than are specified on the Master. The Master Step owns the list of files. Both the search-result-file-index attribute on each file element and the message element for each file are defined by the Master and must be duplicated on every step. When moving a 4.x configuration with step-setup files to a 5.x configuration, the following events will occur:
The set of all the step-setup files found on all the child steps in the slice are added to the Master Step and each child step.
If more than one step in the slice defines step-setup files for the same search-result-file-index, then all the messages for that search-result-file-index will be concatenated by newlines.
The enabled attribute will be set to true for the step-setup on all child steps.
The locked attribute will be set to false for the step-setup on all child steps.
In the case of importAndOverwrite, the step-setup of any existing child steps for an overwritten master step will be included.
Upon validation after import, the following differences have been reconciled:
UDFs that differ by STYLE only
missing defaultProcessTemplate
missing attemptAutoPlacement
missing autoAttachFiles
missing qcWithPlacement
BaseSpace Clarity LIMS provides Audit Trail, a robust data-tracking system that allows you to track the following:
All user activity (who did what and when).
Every action that is written to the database.
Audit Trail has two capture systems, Event Log and Detail Log.
Event Log records familiar BaseSpace Clarity LIMS user actions, and presents this information in a format that is easy to read and understand.
Detail Log records exacting information about the changes resulting from the actions recorded in the Event Log. This includes both the updated values and the previous values.
All of the following steps are performed on the BaseSpace Clarity LIMS server.
With the exception of steps 1 and 8, all are performed as the glsjboss user.
Stop Tomcat:
Access the property tool:
Enable Audit Trail:
Confirm the setting:
Migrate the database:
Start Tomcat - as the glsjboss user:
Log in to BaseSpace Clarity LIMS and make a change – for example, add a project, create a sample, add a custom field, etc. The exact change is not important. There must simply be a change to the activity or the configuration data since Audit Trail was last enabled.
Use psql (postgres) or SQL*Plus (Oracle) to query the main BaseSpace Clarity LIMS database (clarityDB) to verify whether rows are being written to the following two tables (an example query follows the list):
auditeventlog
auditchangelog
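For example, with psql, a quick row count confirms that entries are accumulating (connection details depend on the installation):
psql -d clarityDB -c "SELECT count(*) FROM auditeventlog;"
psql -d clarityDB -c "SELECT count(*) FROM auditchangelog;"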
All of the following steps are performed on the BaseSpace Clarity LIMS server.
With the exception of step 1, all are performed as the glsjboss user.
Stop Tomcat:
Access the property tool:
Disable Audit Trail:
Confirm the setting:
Migrate the database:
Start Tomcat - as the glsjboss user:
WinMerge (graphical), for Windows -
As a best practice, make sure that the configuration is backed up by creating a full manifest file and exporting to a configuration package file (see Step 2).
For details on the Automation Worker log files, and instructions on how to view them, refer to the Automation Worker log file information above.
Export Logs is available only on Illumina Hosted environments. To export log files for an On-premise environment, see the related documentation.
Option | Description
---|---
-a,--apiuri <apiuri> | The BaseSpace Clarity LIMS REST API base URI (ends in "/api"). Either this or --server must be provided.
-k,--package <package file> | File to be imported from or exported to (required if operation is import, importAndOverwrite, export, or validate). If the file is not local, a full path is required.
-f,--force <force> | Force update without prompt when running in importAndOverwrite mode (optional).
-m,--manifest <manifest> | Manifest file (required if operation is export or example). If the file is not local, a full path is required.
-o,--operation <operation> | The operation mode for the Config Slicer tool. Options are import, export, validate, example, importAndOverwrite, and custom (required).
-p,--password <password> | The BaseSpace Clarity LIMS REST API password. If encrypted, use "ENC(<encrypted-password>)" (required).
-pr,--protocols <protocols> | The protocols to include in the custom manifest (optional).
-pt,--processTypes <processTypes> | The process types to include in the custom manifest (optional).
-s,--server <server> | The BaseSpace Clarity LIMS REST API server (either this or --apiuri must be provided).
-S,--Strict | Strict mode for import (fail fast; default mode is best-effort) (optional).
-u,--username <username> | The BaseSpace Clarity LIMS REST API username (required).
-w,--workflows <workflows> | The workflows to include in the custom manifest (optional).
Status | Description
---|---
Requesting | Request to whitelist the IP address is being processed. All new requests default to this status.
Request Failed | Request to whitelist the IP address has failed. Contact Illumina Support with the Reference ID provided in the Failure column for further assistance.
Active | IP address is whitelisted and access to the port is enabled.
Delisting | Request to delist the IP address is being processed.
Delist Failed | Request to delist the IP address has failed, and the port continues to be accessible. Contact Illumina Support with the Reference ID provided in the Failure column for further assistance.
Archived | IP address is removed from the whitelist and can no longer access the port.
Status | Description
---|---
Requesting | Request to enable SSH access is being processed. All new requests default to this status.
Request Failed | Request to enable SSH access has failed. Contact Illumina Support with the Reference ID provided in the Failure column for further assistance.
Active | Request to enable SSH access is successful. Access is valid for 180 days from the date of request.
Archived | Validity of access has expired. SSH access for the user is removed.
Note: As of BaseSpace Clarity LIMS v6, Audit Trail is enabled by default. When Audit Trail is enabled, you may experience a small performance hit due to the overhead of writing these entries to the database. It is therefore recommended that you periodically archive the Audit Trail database so that it does not become too large.
For instructions, see the