During the week, a discussion came up about the different places where a person might read the DataStage and QualityStage logs in InfoSphere. I hadn't really thought about it before, but here are a few places that come to mind:
IBM InfoSphere DataStage and QualityStage Operations Console
IBM InfoSphere DataStage and QualityStage Director client
IBM InfoSphere DataStage and QualityStage Designer client (by pressing Ctrl+L)
While investigating a recent InfoSphere Information Server (IIS), DataStage, Essbase Connect error, I found the explanations of the probable causes of the error not terribly meaningful. So, now that I have run our error to ground, I thought it might be nice to jot down a quick note on the potential causes of the 'Client Commands are Currently Not Being Accepted' error, which I gleaned from the process.
Error Message Id
An error occurred while processing the request on the server. The error information is 1051544 (message on contacting or from application:[<<DateTimeStamp>>]Local////3544/Error(1013204) Client Commands are Currently Not Being Accepted.
Possible Causes of The Error
This error indicates a problem accessing the Essbase object, or accessing the security within the Essbase object. It can be the result of multiple issues, such as:
Object doesn't exist – the Essbase object does not exist in the location specified
Communications – the location is unavailable or cannot be reached
Path Security – security prevents access to the Essbase object's location
Essbase Security – security within the Essbase object does not support the user or filter being submitted; the Essbase object's security may also be corrupted or incomplete
Essbase Object Structure – the Essbase object is not structured to support the filter, or the Essbase filter is malformed for the current structure
IBM Knowledge Center, InfoSphere Information Server 11.7.0, Connecting to data sources, Enterprise applications, IBM InfoSphere Information Server Pack for Hyperion Essbase
When you are controlling a chain of sequences in a job stream and taking advantage of reusable (multiple-instance) jobs, it is useful to be able to pass the invocation ID from the master controlling sequence and have it passed down and assigned to the job run. This can easily be done without needing to manually enter the values in each of the sequences, by leveraging the DSJobInvocationId variable. For this to work:
The job must have ‘Allow Multiple Instance’ enabled
The parent sequence must provide the invocation ID (that is, have the Invocation Name entered)
The receiving child sequence must have the invocation variable (DSJobInvocationId) entered
At runtime, an instance of the multi-instance job is generated for that invocation ID, with its own logs.
This approach allows for the reuse of jobs and the assignment of meaningful instance extension names, which are managed from a single point of entry in the object tree.
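The same instance-naming idea applies when jobs are launched from the command line: the dsjob client accepts a `job.invocationid` target. Below is a minimal Python sketch that builds such a dsjob command; the project, job, and parameter names are hypothetical, and the command would need to run on the engine host where dsjob is available.

```python
# Sketch: building a dsjob command that runs a multi-instance job under an
# explicit invocation ID. Project/job/parameter names here are examples only.
import subprocess

def run_job_instance(project, job, invocation_id, params=None):
    """Build a 'dsjob -run' command for job.invocation_id."""
    cmd = ["dsjob", "-run", "-jobstatus"]
    for name, value in (params or {}).items():
        cmd += ["-param", f"{name}={value}"]
    cmd.append(project)
    cmd.append(f"{job}.{invocation_id}")  # job.invocationid selects the instance
    return cmd  # pass to subprocess.run(cmd) on the engine host

print(run_job_instance("MyProject", "LoadCustomers", "DAILY"))
```

Each invocation ID gets its own log history in the Director, which is what makes the instance extension names meaningful.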
The APT_TSortOperator warning happens when there is a conflict in the partitioning behavior between stages. Usually, the successor (downstream) stage has its 'Partitioning / Collecting' and 'Sorting' properties set in a way that conflicts with the predecessor (upstream) stage's properties, which it is set to preserve. The warning looks like this:
<<Link Name Where Warning Occurred>>: When checking operator: Operator of type “APT_TSortOperator”: will partition despite the preserve-partitioning flag on the data set on input port 0.
First, verify that the partitioning behaviors of both stages are correct
If so, set the predecessor's 'Preserve Partitioning' property to "Clear"
If not, correct the partitioning behavior of the stage that is in error
Beware when you see this message while working with Booleans in DataStage: the message displays as informational (at least it did for me), not as a warning or an error. Even though it seems innocuous, what it meant for my job was that the Boolean ('true' / 'false') was not being interpreted, and everything posted as 'false'.
In DataStage, the Netezza 'Boolean' field/data type maps to the 'Bit' SQL type, which expects a numeric input of zero (0) or one (1). So, my solution (once I detected the problem during unit testing) was to put Transformer stage logic in place to convert the Boolean input to the expected numeric value.
Netezza to DataStage Data Type Mapping
Netezza data type: Boolean
DataStage data type (SQL type): Bit
Expected input value: 0 or 1 (1 = true, 0 = false)
Transformer Stage Boolean Handling Logic
A Netezza Boolean field can store true values, false values, and nulls. So, some thought should be given to your desired data outcome for nulls.
This first example sets nulls to a specific value, which can support a specific business rule for null handling and also provide null handling for non-nullable fields. Here we are setting nulls to the numeric value for 'true' and all other non-true inputs to 'false'.
If IsNull(Lnk_Src_In.USER_ACTIVE) Then 1 Else If Lnk_Src_In.USER_ACTIVE = 'true' Then 1 Else 0
These second examples let nulls fall through to the Else value. If that matches your intended logic direction, this still provides null handling for non-nullable fields.
If Lnk_Src_In.USER_ACTIVE = 'true' Then 1 Else 0
If Lnk_Src_In.USER_ACTIVE = 'false' Then 0 Else 1
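To make the null-handling difference concrete, here is the same logic sketched in Python (illustrative only; the column name and string values mirror the expressions above):

```python
# Sketch of the Transformer rules above. The Netezza BOOLEAN source value
# arrives as 'true', 'false', or None (null); the Bit target wants 0 or 1.

def bool_to_bit_nulls_true(value):
    """First rule: nulls are explicitly mapped to 1 (true)."""
    if value is None:
        return 1
    return 1 if value == "true" else 0

def bool_to_bit_nulls_fallthrough(value):
    """Second rule: nulls simply fall through to the Else value (0 here)."""
    return 1 if value == "true" else 0

print(bool_to_bit_nulls_true(None))          # 1 - null treated as true
print(bool_to_bit_nulls_fallthrough(None))   # 0 - null falls to the Else
```

Note that the two rules only disagree on nulls, which is exactly the business decision the text asks you to make up front.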
Director Log Message
<<Link Name Where Message Occurred>>: Numeric string expected. Use default value.
Or something like this:
<<Link Name Where Message Occurred>>: Numeric string expected for input column '<<Field Name Here>>'. Use default value.
PureData System for Analytics, PureData System for Analytics 7.2.1, IBM Netezza user-defined functions, UDX data types reference information, Supported data types, Boolean
InfoSphere Information Server, InfoSphere Information Server 11.5.0, Connecting to data sources, Databases, Netezza Performance Server, Netezza connector, Designing jobs by using the Netezza connector, Defining a Netezza connector job, Data type conversions, Data type conversions from Netezza to DataStage
Basically, the Action=8 error, which I normally see when opening the DataStage Director client application, means that one or more of the RT_LOG files have become corrupted. Usually, this problem occurs in relation to disk-space issues, although there can be other causes.
Error calling subroutine: *DataStage*DSR_PROJECT (Action=8); check DataStage is set up correctly in the project
(Subroutine failed to complete successfully (30107))
The Cleanup Approach
The cleanup process really consists of three primary steps:
Free disk space
Restart the application process
Fix corrupted logs
Free Disk Space
This can consist of:
Cleaning ‘/tmp’ Space
Removing any large unnecessary files
Enlarging ‘/tmp’ space allocation
Adding additional disk space, if necessary
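Before and after cleanup, it helps to confirm how much space is actually free. A quick, portable way is Python's standard-library `shutil.disk_usage`; the `/tmp` path here is just an example, so point it at whatever filesystems your InfoSphere engine writes to.

```python
# Quick free-space check on a filesystem (path is an example only).
import shutil

def free_gib(path="/tmp"):
    """Return free space on the filesystem containing 'path', in GiB."""
    usage = shutil.disk_usage(path)   # named tuple: total, used, free (bytes)
    return usage.free / (1024 ** 3)

print(f"/tmp free: {free_gib():.1f} GiB")
```

Running this before and after removing files gives you a concrete measure of what the cleanup recovered.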
Restart Application Processes
Once you have freed the disk space, restarting the VM/server is recommended. However, if that is not a realistic option, then at least restart the InfoSphere DataStage engine to ensure the newly freed space is registered with the applications and that everything is restarted and running.
Fix Corrupted Logs
Perhaps the cleanest way to reset all the logs is to perform a 'Multiple Job Compile'. Running the jobs will also overwrite the logs, but that is a little more hit-and-miss if not all of the jobs are in job streams/batches that can be run at this time. The logs can also be manually overwritten by compiling or resetting individual jobs; the trick with a manual reset is that you have to know which jobs to reset, so this could take a while to get them all. Finally, the logs can be manually dropped and recreated, but I recommend that approach only as a last resort.
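If you do want to reset every job rather than hunt for the corrupted ones, the dsjob CLI can be scripted. The sketch below (project name hypothetical; run on the engine host where dsjob is on the PATH) lists a project's jobs and builds a reset command for each one, forcing the logs to be rewritten.

```python
# Sketch: scripting a log reset for every job in a project via dsjob.
# The project name is an example; review the commands before executing.
import subprocess

def reset_command(project, job):
    """dsjob command that resets one job (rewriting its log)."""
    return ["dsjob", "-run", "-mode", "RESET", project, job]

def reset_all_jobs(project):
    # 'dsjob -ljobs <project>' prints the project's job names
    listing = subprocess.run(["dsjob", "-ljobs", project],
                             capture_output=True, text=True, check=True)
    for job in listing.stdout.split():
        subprocess.run(reset_command(project, job), check=True)
```

Resetting jobs that are mid-stream in a batch schedule can have side effects, so this is best done in a maintenance window.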