Template File Contents


This article describes the metadata, tokens, and special characters that you can include in your custom template files for use with the Template File Generator.

Available from: BaseSpace Clarity LIMS v5.1.x

Metadata

The following entries list and describe the metadata elements that you can include in your template files.

  • Unless otherwise specified, metadata elements are optional. In some cases, a metadata element must be used in conjunction with another element. For example, ILLEGAL.CHARACTERS must be used with ILLEGAL.CHARACTER.REPLACEMENTS.

  • Unless otherwise specified, metadata elements can appear multiple times in the template. However, if they are paired with values, only the first occurrence is used. The other lines are silently ignored.

  • Unless otherwise specified, if a metadata element requires a single value, any additional values are ignored when the file is generated. For example, suppose you include the OUTPUT.TARGET.DIR <path> metadata in your template file and provide more than one value for <path>. The script will process only the first (valid) path value and will ignore all other values.

  • Unless "metadata syntax must match exactly" is specified, metadata elements are detected and used even if there is text appended before or after them. For example the following expressions are equivalent:

    LIST.SEPARATOR, COMMA
    This-expression-is-equivalent-LIST.SEPARATOR-to-the-one-above, COMMA

For more information on metadata and how to use metadata elements in your template files, see the Metadata section in the Creating Template Files article.

CONTROL.SAMPLE.DEFAULT.PROJECT.NAME, <project name>

Defines a project name for control samples. The value specified is used to determine the SAMPLE.PROJECT.NAME and SAMPLE.PROJECT.NAME.ALL token values.

  • If not specified, the default project name for control samples is left empty.

  • If no project name follows the metadata element, the project name is left empty.

Example:

    CONTROL.SAMPLE.DEFAULT.PROJECT.NAME, My Control Sample Project

See also: Defining a project name for control samples.

EXCLUDE.CONTROL.TYPES, <control-type name>, <control-type name>, ...

Excludes control inputs whose control-type URI matches an entry in the exclusion list.

  • The metadata entry must be followed by one or more control-type names, otherwise file generation is aborted.

  • Each control-type name must exist in the LIMS and the metadata syntax must match exactly. If this is not the case, file generation continues, but a warning message displays and a warning is logged in the log file.

  • A warning is issued if the metadata element is included more than once.

Examples:

    EXCLUDE.CONTROL.TYPES, PhiX v3,
    EXCLUDE.CONTROL.TYPES, PhiX v3, Endogenous Positive Control

EXCLUDE.CONTROL.TYPES.ALL

Excludes all control types. Takes precedence over EXCLUDE.CONTROL.TYPES.

  • The metadata syntax must match exactly.

  • If this metadata element is included more than once, file generation completes, but a warning message is logged in the log file.

EXCLUDE.INPUT.ANALYTES

Excludes inputs of type Analyte (derived sample) from the generated file. If used without the INCLUDE.INPUT.RESULTFILES element, the generated files will be empty. File generation finishes with a warning message and a warning is logged in the log file.
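
For example, to generate a file that lists only the ResultFile inputs of a step, this element might be combined with INCLUDE.INPUT.RESULTFILES:

    EXCLUDE.INPUT.ANALYTES
    INCLUDE.INPUT.RESULTFILES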

EXCLUDE.OUTPUT.ANALYTES

Excludes outputs of type Analyte (derived sample) from the generated file. The generated file(s) will be empty if:

  • This element is used without the INCLUDE.OUTPUT.RESULTFILES element.

  • There is no per-input or shared ResultFile output in the step (or in the container, if a GROUP.FILES.BY grouping is enabled).

In both scenarios, file generation finishes with a warning message and a warning is logged in the log file.

GROUP.FILES.BY.<grouping>, <zip file name>

Creates a file for each instance of the specified grouping, i.e., one file per input container or per output container. The script gathers all generated files into one zip file, so only one file placeholder is needed. The metadata may be followed by the name of the zip file that will contain the grouped files. Otherwise, the value set by the -destLIMSID script parameter is used for the file name.

The following groupings are supported:

  • GROUP.FILES.BY.INPUT.CONTAINERS - generates one file per input container

  • GROUP.FILES.BY.OUTPUT.CONTAINERS - generates one file per output container

The following scenarios abort file generation:

  • Collisions between the file names. (See OUTPUT.FILE.NAME.)

  • Attempting to group files by both input and output container in the same template.

Examples:

    GROUP.FILES.BY.INPUT.CONTAINERS, MyInputContainerFile
    GROUP.FILES.BY.OUTPUT.CONTAINERS, MyOutputContainerFile

See also: Generating Multiple Files.

HIDE, <token>, <token>, ... IF <case>

Removes lines from the HEADER_BLOCK section and columns from the HEADER and DATA sections when a <token> matches the <case>. The following case is supported:

  • NODATA: The line/column is removed if the token has no value.

  • All HIDE lines in the template are processed.

  • There can be one or more <token> on a HIDE line.

  • All tokens must be part of the supported tokens list; unsupported tokens abort file generation.

  • If there is no <token> between HIDE and IF, file generation is aborted.
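
For example, a HIDE line might look like the following. This is an illustrative sketch; it assumes that tokens on a HIDE line are written in the same ${...} form used elsewhere in the template.

    HIDE, ${INPUT.POOL.NAME} IF NODATA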

ILLEGAL.CHARACTERS, <character>, <character>, ...
ILLEGAL.CHARACTER.REPLACEMENTS, <replacement>, <replacement>, ...

<character> supports Special Character Mapping. Refer to the Special Characters section below to determine whether a character must be specified using a keyword.

Specifies characters that must not appear in the generated file, and the characters that replace them.

  • Each <character> is replaced by the matching <replacement> element.

  • If ILLEGAL.CHARACTERS or ILLEGAL.CHARACTER.REPLACEMENTS is missing, file generation completes with a warning message and a warning is logged in the log file. No character replacement is performed.

  • The lists following ILLEGAL.CHARACTERS and ILLEGAL.CHARACTER.REPLACEMENTS must match one-to-one, in order. Otherwise, file generation completes with a warning message and a warning is logged in the log file. No character replacement is performed.

Examples:

    ILLEGAL.CHARACTERS,PERIOD
    ILLEGAL.CHARACTER.REPLACEMENTS,_

    ILLEGAL.CHARACTERS,TAB,PERIOD,#,<,>
    ILLEGAL.CHARACTER.REPLACEMENTS,,,_,","

INCLUDE.INPUT.RESULTFILES

Includes inputs of type ResultFile in the generated file. (By default these are excluded.)

ℹ In LIMS v5 and later, ResultFile inputs are only supported in the API.

INCLUDE.OUTPUT.RESULTFILES

Includes outputs of type ResultFile in the generated file. (By default these are excluded.)

LIST.SEPARATOR, <separator>

<separator> supports Special Character Mapping. Refer to the Special Characters section below to determine whether the separator must be provided using a keyword.

Specifies the character(s) used to separate elements for tokens that return lists (e.g., SAMPLE.PROJECT.NAME.ALL).

  • If this metadata is not specified, COMMA is used by default.

  • Must be followed by the separator character(s) to be used. If no separator is specified, file generation is aborted.

Example:

    LIST.SEPARATOR,"; "

OUTPUT.FILE.NAME, <file name>

Specifies the name for the generated file(s).

  • The metadata syntax must match exactly.

  • A subset of tokens is supported in the file name. You can include 'grouping' tokens in the file name, which allows you to create unique file names when generating multiple files. For details and a list of supported tokens, see Using Token Values in File Names.

  • If the metadata element is not followed by the file name, file generation is aborted.

  • If this metadata element is not provided or is incomplete, the value of the -o script parameter is used instead.

    • This causes collisions when multiple files are generated. In this case, file generation completes with an exception message and a warning is logged in the log file.

  • Using a path in the file name is deprecated, but still supported (use OUTPUT.TARGET.DIR instead). File generation finishes with a warning message and a warning is logged in the log file.

  • Characters in the file name must be alpha-numeric, underscores, dashes, or periods.

    • Illegal characters are replaced (see OUTPUT.FILE.NAME.ILLEGAL.CHARACTER.REPLACEMENT).

    • File generation finishes with a warning message and a warning is logged in the log file.

Example:

    OUTPUT.FILE.NAME,NewTemplateFileName.csv

OUTPUT.FILE.NAME.ILLEGAL.CHARACTER.REPLACEMENT, <replacement>

Specifies the character(s) to use when replacing illegal characters in file names.

  • Legal characters are either alpha-numeric or an underscore, dash or period.

  • If this metadata element is not present, an underscore is used instead. (File generation finishes with a warning message and a warning is logged in the log file.)

  • If the replacement character is an illegal character itself, an underscore is used.

  • Must be followed by the character(s) to use for replacing illegal characters in file names. Otherwise, file generation is aborted.
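
For example, to replace illegal file name characters with a dash rather than the default underscore (the value shown is illustrative):

    OUTPUT.FILE.NAME.ILLEGAL.CHARACTER.REPLACEMENT,-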

OUTPUT.SEPARATOR, <separator>

<separator> supports Special Character Mapping. Refer to the Special Characters section below to determine whether the separator must be provided using a keyword.

Specifies the character(s) used to separate columns in the output.

  • If this metadata is not specified, COMMA is used by default.

  • Must be followed by the separator character(s) to be used. If no separator is specified, file generation is aborted.

Example:

    OUTPUT.SEPARATOR,TAB

OUTPUT.TARGET.DIR, <path>

Specifies the target directory (path) in which the generated file(s) are placed.

  • The metadata syntax must match exactly.

  • A subset of tokens is supported in the path.

  • If the metadata element is not followed by a path, file generation is aborted.

  • If this metadata element is not provided or is incomplete, the value of OUTPUT.FILE.NAME is used instead.

  • If OUTPUT.FILE.NAME contains a path, it is replaced by <path>.

Example:

    OUTPUT.TARGET.DIR,/Users/LabTech/TemplateFiles/

PROCESS.POOLED.ARTIFACTS

Includes pools in the generated file as if they were regular input artifacts.

  • When this metadata element is present, it uses demultiplexing logic and prints one row per sample in the pool.

  • In the case of a submitted pool, sample names are generated following this pattern: "<pool-name>-<reagent-id>"

  • If an input is not pooled in this mode, the input is treated as a pool of one sample.

  • If the input is a pool of pools with only one sample, it prints out as a pooled artifact instead of as the input prior to the pool.

  • If this metadata element is not included and the input is a pool, the pool is treated as if it were a single sample and only one row is output in the file.

SCRIPT.VERSION, <major>.<minor>.<patch>

Provides the version of the compatible DriverFileGenerator.jar file.

  • Version compatibility is only checked if this metadata is present.

  • <major>, <minor>, and <patch> must all be present. Otherwise, file generation is aborted.

  • File generation is aborted if the SCRIPT.VERSION <major> value does not match the Template File Generator version.

  • File generation continues with a warning if only the <minor> or <patch> values of SCRIPT.VERSION are later than the Template File Generator version.

⚠ Ensure your NGS version is up to date.

Example:

    SCRIPT.VERSION,1.0.2

SORT.BY.${token}, ${token}, ...

Sorts the <DATA> rows based on the ${token} specified.

  • There is no reverse order.

  • If the SORT.BY. metadata element is not followed by ${token}, it is silently ignored.

Example:

    SORT.BY.${INPUT.CONTAINER.ROW}${INPUT.CONTAINER.COLUMN}

See also: Sorting Logic.

SORT.VERTICAL

Sorts the <DATA> rows based on container column placement.

  • SORT.BY.${INPUT.CONTAINER.ROW}${INPUT.CONTAINER.COLUMN} must also be present in the template. Otherwise, SORT.VERTICAL has no effect and is silently ignored.

See also: Sorting Logic.
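
For example, to sort data rows vertically by container position, the two elements appear together in the template:

    SORT.VERTICAL
    SORT.BY.${INPUT.CONTAINER.ROW}${INPUT.CONTAINER.COLUMN}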

Tokens

A token is a placeholder variable that is replaced with unique data at run time. You can include tokens in automation command lines, in scripts, and in template files.

For example, suppose you include the INPUT.CONTAINER.NAME token in a template file generated by a step. At run time, this token is replaced with the name of the container that was input to the step.

All tokens included in a template file must appear in the following form: ${TOKEN}, for example - ${INPUT.CONTAINER.NAME}.
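
The following sketch shows how tokens might appear in a simple CSV template. It is illustrative only; the section structure (<HEADER_BLOCK>, <HEADER>, <DATA>) and its exact delimiters are described in the Creating Template Files article.

    <HEADER_BLOCK>
    Step LIMS ID,${PROCESS.LIMSID}
    Input Container,${INPUT.CONTAINER.NAME}
    </HEADER_BLOCK>
    <HEADER>
    Sample,Well,Container
    </HEADER>
    <DATA>
    ${INPUT.NAME},${INPUT.CONTAINER.PLACEMENT},${INPUT.CONTAINER.NAME}
    </DATA>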

Input and Output Tokens

For steps with ResultFile inputs or outputs, refer to the following entries in the Metadata list above:

  • INCLUDE.INPUT.RESULTFILES

  • INCLUDE.OUTPUT.RESULTFILES

INPUT.LIMSID OUTPUT.LIMSID

The LIMS ID of a given input / output

INPUT.NAME OUTPUT.NAME

The name of a given input / output

INPUT.CONTAINER.COLUMN OUTPUT.CONTAINER.COLUMN

The column part of the placement of a given input / output in its container

INPUT.CONTAINER.LIMSID OUTPUT.CONTAINER.LIMSID

The LIMS ID of the container of a given input / output. Also supported in:

  • HEADER_BLOCK

  • file name

INPUT.CONTAINER.NAME OUTPUT.CONTAINER.NAME

The name of the container of a given input / output. Also supported in:

  • HEADER_BLOCK

  • file name

INPUT.CONTAINER.PLACEMENT OUTPUT.CONTAINER.PLACEMENT

The placement of a given input / output in its container. Format defined in the <PLACEMENT> segment

INPUT.CONTAINER.ROW OUTPUT.CONTAINER.ROW

The row part of the placement of a given input / output in its container

INPUT.CONTAINER.TYPE OUTPUT.CONTAINER.TYPE

The type of container holding a given input / output. Also supported in:

  • HEADER_BLOCK

  • file name

INPUT.CONTAINER.UDF.<udf name> OUTPUT.CONTAINER.UDF.<udf name>

Get the value of a UDF on the container of a given input / output

INPUT.REAGENT.CATEGORY OUTPUT.REAGENT.CATEGORY

List of categories of reagent on a given input / output

INPUT.REAGENT.NAME OUTPUT.REAGENT.NAME

List of reagents on an input / output

INPUT.REAGENT.SEQUENCE OUTPUT.REAGENT.SEQUENCE

List the sequence of each category of reagent on a given input / output

INPUT.UDF.<udf name> OUTPUT.UDF.<udf name>

Get the value of a UDF on a given input / output

INPUT.POOL.NAME

If the current input is a pool, provides its name. Empty if the input is not a pool

INPUT.POOL.PLACEMENT

If the current input is a pool, provides its placement (not affected by the <PLACEMENT> section). Empty if the input is not a pool.

INPUT.POOL.UDF.<udf name>

If the current input is a pool, provides one of its UDFs. Empty if the input is not a pool.
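
For example, a data row can combine placement and UDF tokens. The UDF names here are hypothetical placeholders:

    ${INPUT.NAME},${INPUT.CONTAINER.PLACEMENT},${INPUT.UDF.Concentration},${OUTPUT.UDF.Volume}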

Process Tokens


PROCESS.LIMSID

The LIMS ID of the current process. Also supported in:

  • HEADER_BLOCK

  • file name

PROCESS.NAME

The name of the current process

PROCESS.UDF.<udf name>

Get the value of a process UDF (on the current step). Also supported in:

  • HEADER_BLOCK

  • file name

PROCESS.TECHNICIAN

The first and last name of the technician running the current process. Also supported in:

  • HEADER_BLOCK

  • file name
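
For example, process tokens can be used to build a unique name for the generated file (the naming pattern is illustrative):

    OUTPUT.FILE.NAME,${PROCESS.LIMSID}_placement.csv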

Submitted Sample Tokens


SAMPLE.LIMSID

List of all submitted sample LIMS IDs of a given artifact

SAMPLE.NAME

List of all submitted sample names of a given artifact

SAMPLE.UDF.<udf name>

Get the value of a UDF on the submitted samples of a given artifact

SAMPLE.UDT.<udt name>.<udf name>

Get the value of a UDT UDF on the submitted samples of a given artifact

SAMPLE.PROJECT.CONTACT

List of the project contacts for all submitted samples of a given artifact

SAMPLE.PROJECT.CONTACT.ALL

List of the project contacts for the submitted samples of all artifacts. Prints all unique project contact names in a line (first name followed by last name) separated by LIST.SEPARATOR. Also supported in:

  • HEADER_BLOCK

  • file name

SAMPLE.PROJECT.LIMSID

List of the project LIMS IDs for all submitted samples of a given artifact

SAMPLE.PROJECT.NAME

List of projects for all submitted samples of a given artifact (uses CONTROL.SAMPLE.DEFAULT.PROJECT.NAME)

SAMPLE.PROJECT.NAME.ALL

List of projects for the submitted samples of all artifacts (uses CONTROL.SAMPLE.DEFAULT.PROJECT.NAME). Prints all unique project names in a line, separated by LIST.SEPARATOR. Also supported in:

  • HEADER_BLOCK

  • file name

SAMPLE.PROJECT.UDF.<udf name>

Get the value of a UDF for the project containing the submitted samples of a given artifact. Example:

    ${SAMPLE.PROJECT.UDF.My Project Level Field}

Other Tokens


DATE

Current date (i.e., when the script is run). The default format uses the host's locale setting.

INDEX

Row number of the data row (in <DATA> segment), starting from 1
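
For example, a row number and the generation date can be added to each data row (the column layout is illustrative):

    ${INDEX},${INPUT.NAME},${DATE}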

Special Characters

Some characters have special meaning in CSV files or during template file generation. To use one of these characters in a template (for example, as a separator or as an illegal-character replacement), specify it with the substitution symbol listed below.

  • ASTERISK: *

  • BACKSLASH: \

  • CARET: ^

  • CLOSING_BRACE: }

  • CLOSING_BRACKET: ]

  • CLOSING_PARENTHESIS: )

  • COMMA: ,

  • DOLLAR_SIGN: $

  • DOUBLE_QUOTE: "

  • OPENING_BRACE: {

  • OPENING_BRACKET: [

  • OPENING_PARENTHESIS: (

  • PERIOD: .

  • PIPE: |

  • PLUS_SIGN: +

  • QUESTION_MARK: ?

  • SINGLE_QUOTE: '

  • TAB: tab character
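
For example, substitution keywords can stand in for characters that would otherwise break the comma-separated metadata syntax (the values shown are illustrative):

    LIST.SEPARATOR,PIPE
    ILLEGAL.CHARACTERS,DOLLAR_SIGN
    ILLEGAL.CHARACTER.REPLACEMENTS,_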
