This blog presents the design of the components that will address Large Document/Transaction Handling for all webMethods interfaces.
The intended audience of this document is the developers who will use this solution to construct an interface point, along with anyone else seeking an in-depth understanding of how the components are to be implemented.
For the purpose of this document, a document is defined as a single file and/or transmission from a Source or to a Target.
An inbound document containing one or more transactions is received by the Integration Server. The document is then split by transaction (a configurable value set in the service call), each transaction is written to disk, and a notification is sent to the Broker.
Figure 2 – Process Node
Each Node Notification received triggers the process node service, which processes the node into the canonical format, tracking the count of the specified element list and publishing a canonical when the specified threshold is met. Once the service has finished processing the node, a Large Transaction Notification is published to the Broker.
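The count-and-publish behavior described above can be sketched in plain code. This is a minimal illustration only, not the actual flow service; the function and parameter names are assumptions.

```python
# Sketch of the count-and-publish loop: accumulate elements and publish
# one canonical each time the configured threshold is met.
# (Hypothetical names; the real implementation is a webMethods flow service.)
def process_node(line_items, threshold, publish):
    """Publish a canonical per full batch of `threshold` items,
    flushing any partial batch at the end. Returns the publish count."""
    batch = []
    published = 0
    for item in line_items:
        batch.append(item)
        if len(batch) == threshold:
            publish(batch)   # one canonical per full batch
            published += 1
            batch = []
    if batch:                # remainder below the threshold
        publish(batch)
        published += 1
    return published
```

For example, 25 line items with a threshold of 10 would yield three canonicals: two full batches and one remainder of five.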
1.1 Overview – Target Processing
Figure 3 – Target Processing
Each Canonical is received by the Target service and processed per Target requirements, then written to disk.
Figure 4 – Target Batching
Each Large Transaction Notification is received by the Target package. The batching service verifies that all the reported "parts" of the original message have been processed, and then batches the data to the target system.
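The completeness check performed by the batching service can be sketched as follows. This is an illustrative sketch only; the transaction id and part-numbering fields are assumptions about what the notifications carry.

```python
# Sketch of the "have all parts been processed?" check that gates
# target batching. (Field names are assumptions for illustration.)
def all_parts_processed(transaction_id, processed_parts, total_parts):
    """processed_parts is a set of (transaction_id, part_number) tuples
    recorded as each split part completes. Return True only when every
    part 1..total_parts of this transaction has been seen."""
    seen = {p for (tid, p) in processed_parts if tid == transaction_id}
    return seen >= set(range(1, total_parts + 1))
```

Only when this check passes would the batching service reassemble the data and transmit it to the target system.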
Broker: The webMethods Broker is the hub of the system. Its main purpose is to exchange documents between components that are connected to it. The Broker is provided by the webMethods Integration Platform software. It stores the webMethods Documents that are related to the interface point. All webMethods Documents are stored in the client queues of the Broker and then dispatched to the components that subscribe to these documents. For further information, please see the webMethods Integration Platform documentation.
Integration Server: The webMethods Integration Server hosts services that contain the logic of the interface point. It uses a JDBC Adapter to connect to the databases. Subscription is performed through a conditional trigger that invokes the appropriate services.
The enterprise document LXKEnterprise.docs:processNodeNotification is used to transmit information regarding the node written to file to the service that will process that node. Each enterprise document contains a single instance of the relevant data set.
The enterprise document LXKEnterprise.docs:largeDocNotification is used to transmit information regarding the original node that was processed on the Source side to the service on the Target side that will batch and send the Target data. Each enterprise document contains a single instance of the relevant data set.
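As a rough illustration of the kind of information these two documents carry, the sketch below models them as plain data classes. The field names are assumptions for illustration only, not the actual webMethods document type definitions.

```python
from dataclasses import dataclass

# Illustrative shapes for the two enterprise documents described above.
# Every field name here is an assumption, not the published schema.
@dataclass
class ProcessNodeNotification:
    document_type_name: str  # namespace of the source canonical
    node_file_path: str      # where the split node was written to disk
    process_service: str     # service that will process this node

@dataclass
class LargeDocNotification:
    transaction_id: str      # unique id of the original source transaction
    part_number: int         # which part of the split this canonical carried
    total_parts: int         # how many parts the source was split into
```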
The following must be defined in order to utilize the large document/transaction handling:
§ Definition of a transaction within the source canonical.
§ Maximum size of a source transaction (can it exceed the current threshold?).
§ Definition of a line item within the source transaction.
§ Requirements for the target transaction (does the transaction need to be processed as a whole on the target side?).
§ Mapping from source to target (elements from the source that need to be passed to the target).
1.1.2 Large Document/Transaction Type
There are three different "flavors" of handling large documents and transactions. Follow the flow below to determine the appropriate process to implement for your project.
1.1.1.1 Large Document Handling – Source
This will split the source document by the defined transaction and process each transaction independently to the target.
§ Create the source package for your project.
§ Invoke the LargeDocHandling.SetUp:InitNewLargeDocHandlingPackage service with the following inputs:
o Package – the name of the source package
o Interface Type = "source"
o documentTypeName = namespace of the source canonical
o transactionElement = element that defines a single transaction in the source canonical
o transactionElementKey = attribute that holds the unique id for the transaction
o processNodeServiceName = the name of the flow service that will be created to process the single source transaction
o splitNodeServiceName = the name of the flow service that will be created to split the source document into single transactions
o splitTransactions = "false"
o lineItemElement, lineCount, and transactionHeaderAttributes = null
§ Update the template service with steps for processing and publishing the source transaction.
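For reference, the setup inputs listed above can be pictured as a simple map. This is an illustrative sketch only: the package, document, and element names are placeholders, and the exact parameter keys should be taken from the actual service signature.

```python
# Illustrative input map for the large document handling setup service.
# All values are placeholders for your project's own names.
init_inputs = {
    "Package": "MySourcePackage",              # the source package created above
    "InterfaceType": "source",
    "documentTypeName": "MyProject.docs:sourceCanonical",
    "transactionElement": "Transaction",       # element defining one transaction
    "transactionElementKey": "transactionId",  # attribute holding the unique id
    "processNodeServiceName": "processNode",
    "splitNodeServiceName": "splitNode",
    "splitTransactions": "false",              # large document handling only
    "lineItemElement": None,
    "lineCount": None,
    "transactionHeaderAttributes": None,
}
```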
These webMethods Enterprise Documents represent the Enterprise Data for the appropriate business objects; they are publishable to the Broker where noted.
This document is used to publish the notification that all parts of a single transaction have completed processing and been published. This document is only used when splitting transactions. This document is publishable to the wM Broker.
This document is used to publish the notification that a single transaction has been written to disk and is ready to be processed. This document is publishable to wM Broker.
This is the template service that will be invoked from the trigger. It is the template for splitting, processing, and publishing the source transaction and transaction notification. It calls LargeDocHandling.SetUp.templates.source:mapDataNode and LargeDocHandling.SetUp.templates.source:publishCanonical.
This is the template service that will map the transaction header attributes. It is invoked from LargeDocHandling.SetUp.templates.source:processNode_splitTransaction.
This is the template service that will publish the split transactions. It is invoked from LargeDocHandling.SetUp.templates.source:processNode_splitTransaction.
This is the template service that will subscribe to the split transactions. It will convert the source transactions to the target format and write the records to disk.
This is the template service that will subscribe to the Large Transaction notifications. It will verify that all of the transaction parts have completed processing and call the service
This template trigger receives any LargeDocHandling.docs:largeDocNotification canonicals. The filter must be configured to match the trigger for the target/transaction.
This service splits the provided BizDoc into transactions as defined by the service input variables. Each transaction is written to disk (NodeToFile) and a notification is published to the Broker.
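The splitting step can be sketched with a streaming XML parse, which keeps memory flat because each transaction subtree is released as soon as it is handed off, mirroring the write-to-disk-and-notify behavior described above. This is an illustrative sketch, not the actual service; the tag name and the write_node callback are assumptions.

```python
import xml.etree.ElementTree as ET

# Sketch of splitting a large source document by transaction element.
# iterparse streams the file, so the whole document is never held in
# memory; each completed transaction is handed off and then cleared.
def split_transactions(xml_path, transaction_tag, write_node):
    """Call write_node with each serialized <transaction_tag> element
    (e.g. to write it to disk and publish a notification).
    Returns the number of transactions found."""
    count = 0
    for event, elem in ET.iterparse(xml_path, events=("end",)):
        if elem.tag == transaction_tag:
            write_node(ET.tostring(elem))  # hand off one transaction
            elem.clear()                   # release the subtree
            count += 1
    return count
```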
This service is invoked by the LargeDocHandling.triggers:subscribeNodeNotification trigger. It will simply invoke the service provided in the LargeDocHandling.docs:processNodeNotification document.
This trigger receives any LargeDocHandling.docs:processNodeNotification canonicals and invokes the LargeDocHandling.triggers:invokeProcessNodeService service.
All error handling will conform to the Lexmark Common Components Design document which details global error handling for both webMethods and JDEdwards.
1. Risk to memory consumption when receiving a BizDoc from Trading Networks.
In the case of a large document, the BizDoc as received from Trading Networks contains the complete document, which can cause the server to exceed its allocated memory.
Mitigations:
1. If Trading Networks large document handling is configured correctly, the BizDoc will only contain a reference to the document; the complete document will be stored to disk.
2. The BizDoc is dropped from the pipeline as soon as the xmlNode is extracted.
2. Risk to memory consumption when batching transactions to the target.
If target batching is required, when all of the transactions have been reassembled for transmission to the target system, the complete transaction could exceed the current threshold, causing the server to exceed its allocated memory.
Mitigation: Target batching should only be used when there is no other option. The complete transaction is only held in memory during the actual transmission to the target system and is dropped as soon as there is a response from the target.
3. Available disk space/resources.
Transaction volume could cause the physical server to run out of available disk space and/or inodes.
Mitigation: A script will be created to clear all of the "Archive" directories after 24 hours. This script will be scheduled as a cron job, which will be added to the server start-up process flow.
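A sketch of that cleanup job is shown below, assuming a conventional layout in which processed files land in "Archive" subdirectories under a common root. The directory layout and root path are assumptions; the 24-hour cutoff mirrors the description above.

```python
import os
import time

# Sketch of the scheduled cleanup job: delete files older than the
# cutoff from every "Archive" directory under the given root.
def clear_archives(root, max_age_hours=24):
    """Remove files older than max_age_hours from each Archive
    directory under root. Returns the number of files removed."""
    cutoff = time.time() - max_age_hours * 3600
    removed = 0
    for dirpath, dirnames, filenames in os.walk(root):
        if os.path.basename(dirpath) != "Archive":
            continue
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.getmtime(path) < cutoff:
                os.remove(path)
                removed += 1
    return removed
```

In practice this would run from cron (e.g. hourly), with the root pointed at the directory tree the interfaces write into.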