Job class

An instance of the Job class represents a job (file or job folder) waiting to be processed in one of the input folders for the flow element associated with the script (see working with job folders and internal job tickets for background information). Job objects can be obtained through functions of the Switch class.

The second argument passed to the jobArrived entry point is the newly arrived job that triggered the entry point's invocation. This is commonly called the current job. By convention, the name for the object representing the current job is "job" (although the script developer can choose to use another name).

Processing a job

Processing a job in a script usually consists of the following steps:

- Obtain the path of the incoming job.
- Request a temporary output path with createPathWithName() or createPathWithExtension().
- Process the input and write the result to the temporary location.
- Call one of the Job.sendTo() functions to send the result to an outgoing connection.

If the incoming job is passed along without change, the Job.sendTo() functions can be called directly on the incoming job path, skipping all intermediate steps.

Based on the above scenario, Switch automatically handles all complexities:

- Inserting, replacing or removing the unique filename prefix as appropriate.
- Moving or copying files and folders as needed.
- Removing the job from the input folder once it has been sent.
- Deleting any files or folders left at temporary paths after the entry point returns.

A job remains in the input folder until one of the Job.sendTo() or Job.fail() functions has been called for the job. The jobArrived entry point will be invoked only once for each job, so if the entry point does not call a sendTo() or fail() function for the job, the script should do so at a later time (in a timerFired entry point or in a subsequent invocation of the jobArrived entry point for a different job).
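The contract described above can be sketched as a minimal jobArrived entry point. The Job calls are the ones documented on this page; stubJob and the call-recording array are hypothetical stand-ins so the sketch can be read (and run) outside Switch.

```javascript
// Minimal pass-through jobArrived: every job must end in a sendTo() or fail().
function jobArrived(s, job) {
    try {
        // Pass the job along unchanged over the single outgoing move connection.
        job.sendToSingle(job.getPath(), job.getName());
    } catch (e) {
        // On any error, fail the job so it does not linger in the input folder.
        job.fail("Processing failed: " + String(e));
    }
}

// --- hypothetical stub so the sketch is self-contained (not Switch API) ---
const calls = [];
const stubJob = {
    getPath: () => "/in/_0G63D_myjob.txt",
    getName: () => "myjob.txt",
    sendToSingle: (path, name) => calls.push(["sendToSingle", path, name]),
    fail: (msg) => calls.push(["fail", msg]),
};
jobArrived(null, stubJob);
```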

Note:

After the Switch server quits and restarts, the jobArrived entry point will be invoked once again for all jobs in the input folder.

Getting job file/folder information

getPath( ) : String

Returns the absolute file or folder path for the job as it resides in the input folder, including unique filename prefix.

getUniqueNamePrefix( ) : String

Returns the unique filename prefix used for the job, without the underscores. For example, for a job called "_0G63D_myjob.txt" this function would return "0G63D".

getName( ) : String

Returns the file or folder name for the job, including filename extension if present, but excluding the unique filename prefix.

getNameProper( ) : String

Returns the file or folder name for the job excluding filename extension and excluding the unique filename prefix.

getExtension( ) : String

Returns the job's filename extension, or the empty string if there is none.
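The relationship between the four name-related getters can be illustrated with a small helper that decomposes a job filename according to the unique-name-prefix convention. This helper is purely illustrative and not part of the Job API.

```javascript
// Hypothetical helper showing how a job filename like "_0G63D_myjob.txt"
// decomposes into the values returned by the name-related getters.
function parseJobName(fileName) {
    const m = /^_([^_]+)_(.*)$/.exec(fileName); // "_PREFIX_rest"
    const rest = m ? m[2] : fileName;
    const dot = rest.lastIndexOf(".");
    return {
        uniqueNamePrefix: m ? m[1] : "",                 // getUniqueNamePrefix()
        name: rest,                                      // getName()
        nameProper: dot > 0 ? rest.slice(0, dot) : rest, // getNameProper()
        extension: dot > 0 ? rest.slice(dot + 1) : "",   // getExtension()
    };
}
```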

getMacType( ) : String

Returns the job's Mac file type code as a 4-character string if available, otherwise the empty string.

getMacCreator( ) : String

Returns the job's Mac creator code as a 4-character string if available, otherwise the empty string.

isType( ext : String ) : Boolean

Returns true if the job matches the file type indicated by the specified filename extension, and false otherwise.

A file matches if its filename extension and/or its Mac file type (after conversion) match the specified filename extension. A folder matches if any of the files at the topmost level inside the folder match the specified type. These semantics are similar to those of matching cross-platform file types in filter connections.

isFile( ) : Boolean

Returns true if the job is a single file, false otherwise.

isFolder( ) : Boolean

Returns true if the job is a folder, false otherwise.

getFileCount( ) : Number

Returns the number of files in the job. If it is a single file, the function returns 1. If it is a job folder, the function returns the number of files in the job folder and any subfolders, recursively (folders and subfolders themselves do not contribute to the count).

getByteCount( ) : Number

Returns the size in bytes of the job. If it is a job folder, all files in subfolders are included in the count, recursively.

Getting temporary output paths

createPathWithName( name : String, createFolder : Boolean ) : String

Returns an absolute path to a writable temporary location in the file system with the specified filename (which should include the filename extension if one is required).

If the optional createFolder argument is true, the function creates a new folder at the location indicated by the returned path. The caller can use this folder as the root of an output job folder or just as a location to store temporary files, one of which may become an output file.

If the optional createFolder argument is false or missing, the function does not create a file or folder at the location indicated by the returned path (but the parent folder is guaranteed to exist). The caller can use the path to create an output file or a temporary file.

The returned path is guaranteed to differ between invocations of the function, even for the same job and with identical argument values.

Also, after the entry point returns, Switch deletes any file or folder left at any of the paths created using this function.

createPathWithExtension( ext : String, createFolder : Boolean ) : String

Same as above but uses the filename of the job after substituting the specified extension (rather than replacing the complete filename). If the specified extension is the empty string, any trailing dot is removed from the filename; this makes it easy to create a folder path without an extension.
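A typical use of these functions: request a temporary output path, produce the output there, then hand it to a sendTo() function. In the sketch below, convert() and stubJob are hypothetical stand-ins; the stub's createPathWithExtension mimics the documented name substitution.

```javascript
// Sketch: generate output at a temporary path, then send it along.
function jobArrived(s, job) {
    // Switch guarantees the parent folder of this path exists.
    const outPath = job.createPathWithExtension("pdf", false);
    convert(job.getPath(), outPath); // produce the output file (hypothetical)
    job.sendToSingle(outPath, null); // output name taken from outPath
}

// --- hypothetical processing step and stub (not Switch API) ---
const produced = [];
function convert(src, dst) { produced.push(dst); }

const sent = [];
const stubJob = {
    getPath: () => "/in/_0G63D_report.xml",
    createPathWithExtension: (ext) => "/tmp/_0G63D_report." + ext,
    sendToSingle: (path, name) => sent.push(path),
};
jobArrived(null, stubJob);
```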

Sending jobs to outgoing connections

The semantics for sending files to outgoing connections depend on the connection type. However, all outgoing connections of a flow element (and thus a script) have the same type, and this type is defined in the script declaration.

To successfully complete processing of a job, the script must call exactly one sendTo() function for each generated output file/folder (i.e. there may be multiple sendTo() calls for the same job). If there is no output, the script must call the sendToNull() function. It is allowed to defer these calls to a subsequent entry point invocation (for example, when grouping multiple incoming jobs in a job folder).

If a fatal error occurs, the script must call one of the fail() functions as appropriate. Calling a fail() function cancels any and all previous sendTo() function calls for the same job during the same entry point invocation.

Once a fail() function has been called for a job, any further calls to fail() or sendTo() functions for the same job during the same entry point invocation are ignored (such calls just log a warning message rather than performing the requested operation).

Guidance on generating output jobs

One to many

To deliver multiple output jobs triggered by or associated with an incoming job, repeatedly call the sendTo() function on the input job rather than using the createNewJob() function; this ensures that all output jobs correctly inherit the job ticket of the originating job.

This guideline holds even if the output job is a totally different file; for example, a preflight report for a PDF file, or a file selected from a database depending on the contents of an incoming XML file.

Many to one

Leave jobs in the input folder (by not calling any sendTo() functions) until you have everything needed to generate a complete output job; then call sendTo() on the "primary" input job and sendToNull() on all other input jobs related to this output job – all in a single entry point invocation.

The output job inherits only the job ticket of the primary input job; if this is unacceptable you'll have to merge metadata from the other input jobs in a meaningful way (similar to the built-in job assembler).

Many to many

In most situations this is simply a combination of the previous two use cases.

However, if it is not possible to generate all output jobs during the same entry point invocation, you need to call setAutoComplete(false) on the input job so that it is not automatically removed from the input folder at the end of the entry point invocation; this is an extremely rare situation, so in most cases you don't need to worry about preventing auto-completion.

None to any

You need the createNewJob() function only when there really is no originating job; for example, when you eject a new output file based on a timer or an external event that is not associated with a job already in the flow.

Any to none

Whenever you're done with an input job, call sendToNull() on it.
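The many-to-one pattern can be sketched as follows. The pending array, isSetComplete() and merge() are hypothetical helpers, and the stub jobs stand in for real Switch Job objects; only the sendTo calls reflect the documented API.

```javascript
// Many-to-one sketch: hold jobs (no sendTo call) until the set is complete,
// then send the primary job and complete the rest in one invocation.
const pending = [];

function jobArrived(s, job) {
    pending.push(job);
    if (!isSetComplete(pending)) {
        return; // no sendTo()/fail() yet: the jobs stay in the input folder
    }
    const primary = pending[0];
    primary.sendToSingle(merge(pending), null); // inherits primary's ticket
    for (let i = 1; i < pending.length; i++) {
        pending[i].sendToNull(pending[i].getPath()); // complete the others
    }
    pending.length = 0;
}

// --- hypothetical helpers and stubs (not Switch API) ---
function isSetComplete(jobs) { return jobs.length === 2; }
function merge(jobs) { return "/tmp/merged.zip"; }

function makeStubJob(path) {
    return {
        path: path, sent: null,
        getPath() { return this.path; },
        sendToSingle(p) { this.sent = ["single", p]; },
        sendToNull(p) { this.sent = ["null", p]; },
    };
}
const a = makeStubJob("/in/_A_part1.txt");
const b = makeStubJob("/in/_B_part2.txt");
jobArrived(null, a); // first job: held in the input folder
jobArrived(null, b); // set complete: all outputs generated
```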

SendTo functions

In their "path" argument the sendTo() functions expect to be passed the absolute path of the file or folder that should be sent along. This could be any path:

- The path of the incoming job itself (when passing the job along unchanged).
- A temporary path obtained from createPathWithName() or createPathWithExtension().
- Any other absolute path in the file system.

In their optional "name" argument the sendTo() functions expect to be passed the filename for the output file or job folder (including filename extension; a path portion and/or a unique name prefix are ignored). If the "name" argument is null or missing, the filename in the "path" argument is used instead.

The sendTo() functions automatically insert or replace the unique name prefix as appropriate, and they move or copy files as needed.

After the entry point returns, Switch removes all jobs for which one or more sendTo() functions were called from the input folder, and it deletes any file or folder left at any of the paths passed to any of the sendTo() functions. Thus a script should never call sendTo() on files that must be preserved. For example, a script that injects a file from an asset management system into a flow should copy the file to a temporary location and then call sendTo() on the copy.

sendToNull( path : String )

Marks the job as completed without generating any output. The path is ignored other than for marking the indicated file/folder for deletion.

sendToSingle( path : String, name : String )

Sends a file/folder to the single outgoing move connection. If the flow element has outgoing connection(s) of another type or if it has more than one move connection this function logs an error and does nothing. If the flow element has no outgoing connections, fail() is invoked instead with an appropriate error message.

sendToData( level: Number, path : String, name : String )

Sends a file/folder to any and all outgoing traffic-light data connections that have the specified level property enabled (success = 1, warning = 2, error = 3). If the flow element has outgoing connection(s) of another type this function logs an error and does nothing. If the flow element has no outgoing data connections of the specified level, fail() is invoked instead with an appropriate error message.

sendToLog( level: Number, path : String, name : String, model : String )

Sends a file/folder to any and all outgoing traffic-light log connections that have the specified level property enabled (success = 1, warning = 2, error = 3). If the flow element has outgoing connection(s) of another type this function logs an error and does nothing. If the flow element has no outgoing log connections of the specified level, nothing happens (i.e. the log file is discarded).

The model argument is used only in case the log file is sent over a "Data and log" connection as a metadata dataset attached to the job. In that case the value of model determines the data model of the dataset ("XML", "JDF", "XMP" or "Opaque"). If the argument is null or missing or if it has an unsupported value, the data model is set to "Opaque".

sendToFilter( path : String, name : String )

Sends a file/folder to all outgoing connections with a file filter that matches the file/folder being sent. If there is no such connection, fail() is invoked instead with an appropriate error message.

The flow element must have outgoing filter connections without folder filter properties; otherwise this function logs an error and does nothing.

sendToFolderFilter( foldernames : String[ ], path : String, name : String )

Similar to sendToFilter() but also honors the folder filter properties on the outgoing connections, using the list of folder names passed in the first argument.

The flow element must have outgoing filter connections with folder filter properties; otherwise this function logs an error and does nothing.

sendTo( c : Connection, path : String, name : String )

Sends a file/folder to the specified outgoing connection, regardless of connection type.

Fail functions

See the description of the Environment.log() function for more information on the message and extra arguments.

A script should invoke failProcess() rather than fail() or failRetry() when it encounters an error condition that does not depend on the job being processed but rather on the process itself (for example, a network resource is not available). In such a case all subsequent jobs would fail as well (because the process is broken) and moving them all to the problem jobs folder makes no sense. It is much more meaningful to hold the jobs in front of the broken process and retry the process from time to time. See viewing problem status for more information on problem jobs and processes.

fail( message : String, extra : String or Number )

Logs a fatal error for the job with the specified message and moves the job to the problem jobs folder.

failAndRetry( message : String, extra : String or Number )

Similar to fail() but requests that Switch retries processing the job if this was the first attempt at processing it. This is useful if an external application or resource produced an error that may go away when retrying the job a second time. Switch keeps track of the retry count, so the script doesn't need to worry about it. If the job fails a second time it will be moved to the problem jobs folder.

If the script has the persistent execution mode, Switch will invoke the finalizeProcessing and initializeProcessing entry points before trying again, so that the external resource is re-initialized.

failProcess( message : String, extra : String or Number )

Logs a fatal error for the job with the specified message and puts the flow element instance in the "problem process" state. The job is not affected (i.e. it stays in the input folder) and a new jobArrived event will be generated for the job when the process is retried.
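The choice between the three fail functions can be sketched as an error handler. NetworkDownError and TransientError are hypothetical error classes, and stubJob is a hypothetical stand-in; the three fail calls are the documented API.

```javascript
// Sketch: choose the right fail function for the kind of error at hand.
class NetworkDownError extends Error {}
class TransientError extends Error {}

function handleError(job, err) {
    if (err instanceof NetworkDownError) {
        // The process is broken, not the job: hold jobs, retry the process.
        job.failProcess("Resource unavailable: " + err.message);
    } else if (err instanceof TransientError) {
        // A second attempt at this job may succeed.
        job.failAndRetry("Transient failure: " + err.message);
    } else {
        // The job itself is bad: move it to the problem jobs folder.
        job.fail("Cannot process job: " + err.message);
    }
}

// --- hypothetical stub recording which fail function was chosen ---
const chosen = [];
const stubJob = {
    fail: (m) => chosen.push("fail"),
    failAndRetry: (m) => chosen.push("failAndRetry"),
    failProcess: (m) => chosen.push("failProcess"),
};
handleError(stubJob, new NetworkDownError("FTP server down"));
handleError(stubJob, new Error("corrupt PDF"));
```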

Logging

Switch automatically issues the appropriate log messages when a script invokes one of the Job.sendTo() or Job.fail() functions. A script can call the function described below to log additional job-related messages.

log( type : Number, message : String, extra : String or Number )

Logs a message of the specified type for this job, automatically including the appropriate job information.

See the description of the Environment.log() function for more information on the arguments.

Getting job ticket info

See using hierarchy info, using email info, viewing job state statistics, and configuring users for background information.

getHierarchyPath( ) : String[ ]

Returns an Array with the location path segments in the hierarchy info associated with the job, or an empty array if there is no hierarchy info. The topmost path segment is stored at index 0.

getEmailAddresses( ) : String[ ]

Returns an Array with the email addresses in the email info associated with the job, or an empty array if there are none.

getEmailBody( ) : String

Returns the email body text in the email info associated with the job, or the empty string if there is none.

getJobState( ) : String

Returns the job state currently set for the job, or the empty string if the job state was never set.

getUserName() : String

Returns the short user name for the job, or the empty string if no user information has been associated with this job.

getPrivateData ( tag : String ) : String

Returns the value of the private data with the specified tag, or the empty string if no private data with that tag was set for the job.

getPrivateDataTags( ) : String[ ]

Returns a list of all tags for which non-empty private data was set for the job.

getPriority( ) : Number

Returns the job priority for this job as a signed integer number; see job priorities.

getArrivalStamp( ) : Number

Returns the arrival stamp for this job as a signed integer number; see arrival stamps.

Updating job ticket info

See using hierarchy info, using email info, viewing job state statistics, and configuring users for background information.

The update functions do not affect jobs that have already been sent (by calling one of the Job.sendTo() functions). They do affect any jobs that will be sent after calling the update function.

setHierarchyPath( segments : String[ ] )

Replaces the location path in the hierarchy info associated with the job with the list of segments in the specified Array. The topmost path segment is stored at index 0.

addBottomHierarchySegment( segment : String )

Adds the specified segment to the location path in the hierarchy info associated with the job, at the end of the list (i.e. at the bottom).

addTopHierarchySegment( segment : String )

Adds the specified segment to the location path in the hierarchy info associated with the job, at the beginning of the list (i.e. at the top).

setEmailAddresses( addresses : String[ ] )

Replaces the email addresses in the email info associated with the job by the list of email addresses in the specified Array.

addEmailAddress( address : String )

Adds the specified email address to the email info associated with the job.

setEmailBody( body : String )

Replaces the email body text in the email info associated with the job by the specified string.

appendEmailBody( body : String )

Appends the specified string to the email body text in the email info associated with the job, inserting a line break after the existing body if any.

setJobState( state : String )

Sets the job state for the job to the specified string.

setUserName( username: String )

Sets the short user name associated with the job. If the specified string does not match the name of an existing user in the user database a warning is logged (but the set operation does succeed).

setPrivateData( tag : String, value : String )

Sets the value of the private data with the specified tag to the specified string. This supports lightweight persistent job information.
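Private data can serve as a lightweight per-job key/value store, for example to count how many times this script has seen a job. In the sketch below, only getPrivateData() and setPrivateData() are the documented API; the stub is a hypothetical in-memory stand-in.

```javascript
// Sketch: count passes through this script using private data.
function bumpPassCount(job) {
    const raw = job.getPrivateData("PassCount"); // empty string if never set
    const count = raw === "" ? 0 : parseInt(raw, 10);
    job.setPrivateData("PassCount", String(count + 1));
    return count + 1;
}

// --- hypothetical in-memory stub of the private-data functions ---
const store = {};
const stubJob = {
    getPrivateData: (tag) => store[tag] || "",
    setPrivateData: (tag, value) => { store[tag] = value; },
};
```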

setPriority( priority : Number )

Sets the job priority for this job to the specified number, rounded to an integer; see job priorities. Generally speaking, jobs with a higher priority are processed before jobs with a lower priority.

refreshArrivalStamp( )

Refreshes the job's arrival stamp so that it seems that the job just arrived in the flow; see arrival stamps.

Switch uses the arrival stamp to determine the processing order for jobs with equal priority (generally speaking, jobs that arrived in the flow first are handled first). Refreshing the arrival stamp can be meaningful when releasing a job that has been held in some location for a long time, so that the job is not unduly processed before other jobs.

Managing external metadata datasets

The functions described here allow associating external metadata with a job's internal job ticket. The Dataset classes in the metadata module implement each of the supported metadata data models: XML data model, JDF data model, XMP data model, and Opaque data model.

The createDataset() and setDataset() functions must not be used after any of the Job.sendTo() or Job.fail() functions was called for a job.

createDataset ( model : String ) : Dataset

Creates and returns a new external metadata dataset object with the specified data model ("XML", "JDF", "XMP" or "Opaque") and with a backing file path and name appropriate for this job, without creating the actual backing file. Returns null if an unknown data model is requested, or if any of the Job.sendTo() or Job.fail() functions was already called for the job.

The caller is responsible for creating a backing file that conforms to the specified data model before attempting to read any data from the dataset. The backing file path can be retrieved from the returned Dataset object.

It is not possible to create a writable or an embedded dataset with this function.

setDataset ( tag : String, value : Dataset )

Associates a metadata dataset object with the specified tag, for this job. A dataset that was created in the context of another job may be associated with this job.

getDataset ( tag : String ) : Dataset

Returns the metadata dataset object associated with the specified tag for this job, or null if there is none.

getDatasetTags( ) : String[ ]

Returns a list of all tags for which a metadata dataset object is associated with the job.

Managing embedded metadata datasets

getEmbeddedDataset ( writable : Boolean ) : Dataset

Returns an embedded metadata dataset object with the XMP data model for the metadata embedded in the job. If there is no supported embedded metadata, the function returns a valid but empty dataset.

The backing file path for the dataset may point to the job itself (if it is an individual file) or to one of the files in the job folder (in some cases). Metadata may be embedded in the file as an XMP packet and/or as binary EXIF or IPTC tags. Metadata fields from multiple sources are synchronized into a unified XMP data model. See supported file formats for more information.

When the getEmbeddedDataset() function is invoked on a job for the first time in a certain entry point, it returns:

- A writable embedded dataset if the "writable" argument is true.
- A read-only embedded dataset if the "writable" argument is false or missing.

If the function is called again on the same job in the same entry point, it returns the embedded dataset object that was created in the first call, ignoring the "writable" argument in the repeat calls.

Note:

A writable dataset keeps its backing file (the job!) open for update. Thus it is necessary to invoke the finishWriting() function on the dataset (which closes the backing file) before attempting to move or process the job in any way.

Managing job families

The job family of a job is the set of jobs that have been directly or indirectly generated from the same original job. See Job families.

isSameFamily( job : Job ) : Boolean

Returns true if the receiving job and the specified job belong to the same job family; otherwise returns false.

startNewFamily( )

Disconnects the receiving job from its current family and makes it start a fresh family. In other words, the job becomes the equivalent of an original job.

joinFamily( job : Job )

Disconnects the receiving job from its current family and makes it join the family of the specified job.

getJobsInFamily( job : Job ) : JobList

Returns a list of Job instances representing the jobs in the job family of the specified job.

The returned Job objects are "read-only" and they support only a limited subset of the functions offered by the Job class. Specifically, these objects support the functions described in the sections Getting job file/folder information and Getting job ticket info, plus the functions isSameFamily() and getJobsInFamily(). Invoking any other function on these objects is not allowed and causes an error.

Furthermore, the information returned by the read-only Job objects is cached in memory at the time the objects are created (i.e. before they are returned). There is no guarantee that the information is still valid, since other processes may be concurrently operating on these jobs.

Accessing the occurrence trail

Information about things that happen to a job (occurrences) is written to the internal job ticket as a job moves along the flow; see job occurrence trail. The following functions allow retrieving occurrences for each job. The Occurrence class allows retrieving the attributes for each occurrence.

getOccurrences( type : String ) : OccurrenceList

Returns a list of all Occurrence instances of the specified type associated with the job, in chronological order (the most recent occurrence is listed last). If the type argument is missing or null, all occurrences associated with the job are returned.

getMostRecentOccurrence( type : String ) : Occurrence

Returns the most recent Occurrence instance of the specified type associated with the job, or null if there is no such instance. If the type argument is missing or null, the most recent occurrence is returned regardless of its type.

Evaluating variables

These functions obtain the value of a single variable in the context of the job (and if needed, in the context of a flow element). The variable must be specified with the usual syntax including the square brackets. Thus by definition the first argument must start with a "[" and end with a "]" and include no white space.

The second argument provides the appropriate Switch class instance for variables that need the flow element context. It can be missing or null for variables that don't need such context.

All of these functions return null if the argument has invalid syntax, if the specified variable is unknown, if an unsupported argument is specified, if the variable needs the flow element context and the Switch argument is null or missing, if an indexed variable is specified without its Index argument, if the variable's text value does not conform to the format for the requested data type, or if the text value is empty.

getVariableAsString( variable : String, s : Switch ) : String

Returns the text representation of the variable, formatted appropriately for the variable's data type and taking into account any formatting and indexing options.

getVariableAsNumber( variable : String, s : Switch ) : Number

Same as getVariableAsString() but interprets the string as a decimal number (with or without a decimal point) or as a rational number (two decimal integer numbers separated by a forward slash). For example, "1.25" and "5/4" represent the same number. The function converts the strings "-INF" and "-Infinity" to negative infinity, and "INF" and "Infinity" to positive infinity. Numbers outside the range of the scripting language are clipped to negative or positive infinity.
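The conversion rules just described can be restated as a small function. This is a hypothetical re-implementation for illustration only, not the actual Switch code.

```javascript
// Illustrative re-implementation of the documented string-to-number rules:
// decimal numbers, rational "a/b" numbers, and the infinity spellings.
function parseVariableNumber(text) {
    if (text === "INF" || text === "Infinity") return Infinity;
    if (text === "-INF" || text === "-Infinity") return -Infinity;
    const slash = text.indexOf("/");
    if (slash >= 0) {
        // Rational number: two decimal integers separated by a slash.
        return parseInt(text.slice(0, slash), 10) /
               parseInt(text.slice(slash + 1), 10);
    }
    return parseFloat(text); // plain decimal, with or without a decimal point
}
```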

getVariableAsBoolean( variable : String, s : Switch ) : Boolean

Same as getVariableAsString() but interprets the string as a Boolean value. The preferred strings are "True" and "False". If these do not match, a case-insensitive comparison is tried, then simply "t" or "f", and finally non-zero and zero integer representations.

getVariableAsDate( variable : String, s : Switch ) : Date

Same as getVariableAsString() but interprets the string as a date-time in the ISO 8601 format. The class of the returned Date object is specific to the scripting environment in use.

Activating fonts

activateFonts( targetPath : String )

Activates the fonts residing in this job folder by copying them to the specified target location. This function does nothing if the receiving job is an individual file or if the job folder doesn't contain any fonts.

The target path specifies the folder into which the fonts are copied, for example an application-specific font folder. If this argument is missing, null or the empty string, the user's default system font location is used instead.

deactivateFonts( )

Undoes the effect of any prior invocations of activateFonts() in this entry point. If necessary this function is called automatically when exiting the entry point; however, it is good programming style to call it explicitly.

Automatic job completion

The following function is provided to allow generating output jobs for the same input job during multiple subsequent entry point invocations. Normally an input job is removed when exiting the first entry point in which an output job is generated.

setAutoComplete( autoComplete : Boolean )

Sets the auto-completion flag for the job to the specified value.

If the auto-completion flag has a value of "false" when exiting the entry point, the job will stay in the input folder, even if any number of sendTo() functions have been called for the job during the entry point invocation.

If the auto-completion flag has a value of "true" when exiting the entry point, and one or more sendTo() functions have been called for the job during the entry point invocation, the job will be removed from the input folder. This is the default behavior.

At the start of each entry point invocation, the auto-completion flag is initialized to its default value of "true" for all jobs. Consequently the effect of the setAutoComplete() function is restricted to the current entry point invocation. Furthermore, since the auto-completion flag is checked only when exiting the entry point, the order in which the setAutoComplete() and sendTo() functions are called is of no importance.

The auto-completion flag does not affect the behavior of the fail() functions. After calling a fail function for a job, the job will be removed from the input folder regardless of the value of the auto-completion flag.
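The auto-completion mechanism can be sketched as follows: disable the flag while outputs remain, so the input job survives the entry point. hasMoreOutputs(), nextOutputPath() and stubJob are hypothetical stand-ins; sendToSingle() and setAutoComplete() are the documented API.

```javascript
// Sketch: keep the input job across entry points until the last output.
function emitNextOutput(job) {
    job.sendToSingle(nextOutputPath(job), null);
    if (hasMoreOutputs(job)) {
        job.setAutoComplete(false); // job stays in the input folder
    }
    // With the default flag (true), the job is removed after the entry point.
}

// --- hypothetical helpers and stub (not Switch API) ---
let remaining = 2;
function hasMoreOutputs(job) { return remaining > 0; }
function nextOutputPath(job) { remaining -= 1; return "/tmp/out" + remaining; }

const events = [];
const stubJob = {
    sendToSingle: (p) => events.push(["send", p]),
    setAutoComplete: (v) => events.push(["auto", v]),
};
emitNextOutput(stubJob); // first output: auto-completion disabled
emitNextOutput(stubJob); // last output: flag left at its default
```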