Install "IS as a Service" or "IS as a Application" and basic configuration after Installation in SoftwareAG webMethods

Duration: 10 minutes
A package framework consists of a package and a set of folders that serve as containers for the various types of objects used to develop Flow services. In this tutorial you will create a typical package framework consisting of a package and a set of folders within the package.

Prerequisites #

Start the Software AG webMethods ESB Integration Server (IS):
  1. If you installed the IS as an Application on Windows or Linux, go to <Installation Directory (C:\SoftwareAG by default)>\IntegrationServer\bin and double-click or execute the startup.bat file.
  2. If you installed the IS as a Service, open the Windows Services app (Control Panel -> System and Security -> Administrative Tools -> Services, or type 'services' in the Start Menu search box and press Enter). Scroll down, right-click Software AG Integration Server, and select Start.
  3. If you installed the IS as an Application and want to register it as a Windows Service, go to <Installation Directory>\IntegrationServer\support\win32 and double-click to execute the installSvc.bat file. You can then start the IS using the instructions in step 2 above.
  4. To shut down the IS, either go to <Installation Directory>\IntegrationServer\bin and double-click to run the shutdown.bat file, or stop it from the Windows Services app using the instructions in step 2 above.
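For quick reference, a typical start/stop session on Windows looks like this (the exact paths and the registered service name depend on your installation):

  cd C:\SoftwareAG\IntegrationServer\bin
  startup.bat                                   (start the IS as an application)
  shutdown.bat                                  (shut the IS down)
  net start "Software AG Integration Server"    (start the IS if registered as a service)
  net stop "Software AG Integration Server"     (stop the service)
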
In addition, a basic knowledge of Software AG Designer is helpful, but not required. Since Software AG Designer is an Eclipse-based IDE, familiarity with Eclipse is recommended.

Step Outline #

You create a package framework using the Software AG Designer IDE to:
  • Connect to the Integration Server
  • Create a package
  • Create a set of folders within the package

Step 1: Connect to the Integration Server #

In this Step: You will connect to the Integration Server by launching the Software AG Designer application and starting a session on the Integration Server.
Important: The Integration Server must be started. See Managing the Evaluation Cloud for information on starting the Integration Server.
To connect to the Integration Server:
  • Start Software AG Designer from the Start Menu -> All Programs -> Software AG -> Tools -> Software AG Designer 9.5.
  • In the Software AG Designer Welcome page, click the Open the Service Development Perspective link.
In the Package Navigator view, connect to the Default Integration Server, if not already connected automatically.
The Package Navigator view displays all the installed system packages (top-level containers) and folders (sub-containers).
You can now create a new package on the Integration Server to hold the assets you create in this and other tutorials.
Note: If an Integration Server has not been defined, you may do so by selecting Preferences from the Window menu.
The Integration Server connection is configured under the Software AG -> Integration Servers preference pane.
Specify the following to connect to the Integration Server:
Field                  Value
Name:                  Default
Host:                  The computer name or IP address where the IS is running, or localhost
Port:                  5555
Username:              Administrator
Password:              manage
Connect immediately:   Checked
Connect at startup:    Checked
Secure connection:     Unchecked
Click the OK button to save the connection, then click OK again to dismiss the Preferences dialog.

Step 2: Create a Package #

In this Step: You will create a new package on the Integration Server.
To create a package:
  • Right-click on the Default Integration Server (not the Default package) in the Package Navigator view and select New -> Package.
Designer prompts you for a package name.
  • Enter the package name FLOW_Tutorial and click the Finish button.
Designer adds the new package on the IS and displays it in the Package Navigator view.
Note: You can perform basic operations, such as copying, moving, and renaming objects in the Package Navigator view by dragging-and-dropping, right-clicking to display context menus, or by double-clicking to select.
You can now create folders within the FLOW_Tutorial package.

Step 3: Create Folders #

In this Step: You will create a set of folders within the FLOW_Tutorial package.
To create a folder:
  • Right-click on the FLOW_Tutorial package and select New > Folder.
Designer will prompt you to name the new folder.
Best Practice: Assign the top-level folder the same name as the package, so that there are no fully qualified object name collisions across packages. An object's fully qualified name starts at the top-level folder in its package and includes all of the subfolders down to and including the object name. We will point this out throughout the tutorials.
Enter the top-level folder name FLOW_Tutorial and click the Finish button.
To create a subfolder, right-click the parent folder. Follow the same steps to create the following folder hierarchy in the FLOW_Tutorial package:

Conclusion #

You have created a new IS Package and folders to save services that you develop.

VMware Workstation unrecoverable error: (vthread-13) Exception 0xc0000005 (access violation) has occurred.

http://i.imgur.com/BqF1r2Q.png

If you get this error, the solution is:

I unchecked "Accelerate 3D Graphics" under VM -> Settings -> Display.

CRC32 Java Service

import com.wm.data.*;
import com.wm.app.b2b.server.ServiceException;
import java.util.zip.CRC32;

public final class CRC_SVC
{
    public static final void CRC(IData pipeline) throws ServiceException {
        IDataCursor pipelineCursor = pipeline.getCursor();
        String s1 = IDataUtil.getString(pipelineCursor, "i");
        String s2 = IDataUtil.getString(pipelineCursor, "k");
        String s = null;
        CRC32 crc = new CRC32();
        try
        {
            // Concatenate the two inputs, then compute the CRC32 checksum over the result
            s = s1 + "~" + s2;
            crc.update(s.getBytes());
            s = String.valueOf(crc.getValue());
            IDataUtil.put(pipelineCursor, "out", s);
        }
        catch (Exception e)
        {
            IDataUtil.put(pipelineCursor, "out", "The requested algorithm was not found");
        }
        finally
        {
            // Always release the pipeline cursor
            pipelineCursor.destroy();
        }
    }
}

webMethods Interview Questions

1.            What is the abbreviation of ERP, SAP, CRM, XML, EDI, CAF, ESB and SOA?
·         ERP: Enterprise Resource Planning; SAP: Systems, Applications & Products; CRM: Customer Relationship Management; XML: Extensible Markup Language; EDI: Electronic Data Interchange; CAF: Composite Application Framework; ESB: Enterprise Service Bus; SOA: Service-Oriented Architecture.

2.            What is a Web-service connector?
·          It invokes web services located on a remote server. It sends HTTP or HTTPS requests to the webMethods IS, which invokes a call to the web service. The IS hosts packages that contain web services and related files, authenticates clients, and verifies that they are authorized to execute the requested services.

3.            Name the command that invokes a service in a Java service, and in which package does it reside?
·          The Service.doInvoke() method is used for calling services from Java services. It resides in com.wm.app.b2b.server.
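A minimal sketch of using it from inside another Java service (the target service pub.flow:debugLog and its message input are illustrative here, not part of the original answer):

import com.wm.data.*;
import com.wm.lang.ns.NSName;
import com.wm.app.b2b.server.Service;
import com.wm.app.b2b.server.ServiceException;

// Inside the body of a Java service:
IData input = IDataFactory.create();
IDataCursor c = input.getCursor();
IDataUtil.put(c, "message", "hello from doInvoke");
c.destroy();
try {
    // doInvoke returns the output pipeline of the invoked service
    IData output = Service.doInvoke(NSName.create("pub.flow:debugLog"), input);
} catch (Exception e) {
    throw new ServiceException(e);
}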

4.            What is adapter notification?
·          It enables an adapter to receive event data from the adapter resource. It is of two types: polling notification and listener notification.

5.            What do you mean by adapter service?
·           It connects to an adapter resource and initiates an operation on the resource.

6.            What is the default HTTP listener port of the webMethods server?
·          5555

7.            What are the flow steps available in ESB? Describe any two.
·          INVOKE, MAP, BRANCH, SEQUENCE, LOOP, EXIT and REPEAT.
Branch: This step allows you to conditionally execute a step based on the value of a variable at run time.
o   Branch on a switch value: Use a variable to determine which child step executes.
o   Branch on an expression: In this case, the Evaluate labels property of the branch should be set to "true".
Sequence: This is used to build a set of steps that you want to treat as a group. Steps in a group are executed in order, one after another, except the steps under a branch condition. It is useful to group a set of steps as a single alternative beneath a Branch step, and to specify the conditions under which the service will exit a sequence of steps without executing the entire set.
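For example, a Branch on a switch value might look like this in outline (the variable and labels are illustrative):

BRANCH on '/paymentType'
  CREDIT: SEQUENCE (handle credit card payments)
  CHECK: SEQUENCE (handle check payments)
  $default: EXIT (unsupported payment type)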
8.        What is CentraSite?

·          It is a kind of repository that facilitates creating, browsing and managing information or artifacts in the registry.

Publish and Subscribe Interview Questions

1.            Where is a document routed when the Broker is not available?
·         OUTBOUND document store.

2.            Name the different storage types of a publishable document.
·         Guaranteed and volatile.

3.            What is the difference between the guaranteed and volatile storage types?
·          The storage type tells the Broker how to store a document. Guaranteed: the document is stored on disk and also in memory. Volatile: the Broker stores the document in memory only.

4.            How does the Broker route the document to subscribers?
·         If the document was published as a broadcast, the Broker identifies the subscribers and places a copy of the document in the client queue for each subscriber. If the document was delivered, the Broker places the document in the queue for the client specified in the delivery request.

5.            Name the element in webMethods that actually supervises the document transfer between IS and Broker?
·         Dispatcher.

6.            What happens if no subscriber is specified for a document?
·         The Broker returns an acknowledgement (ACK) to the publisher and then discards the document, or, in the case of a dead letter subscription, the Broker deposits the document in the queue containing the dead letter subscriber.

7.     When implementing the "Exactly Once" property on a trigger, which three elements help resolve the document status?
·          1. Redelivery count; 2. Document history database; 3. Document resolver service.

8.            What are the three duplicate-detection statuses for a document in a trigger?
·           NEW, DUPLICATE, IN_DOUBT

What do you understand by the state of a service?

There are two types of states. Stateful: useful if the IS receives requests from repeating clients; the client can connect to the IS, be authenticated once, and then issue many service invocations during the same session. Stateless: use this if clients typically send a single invocation request to the IS at a time.
Using stateless services prevents the creation of sessions that will sit unused, taking up resources in the IS.

Regular Expressions Explained: /.+/ and /^ISA/

Using regular expressions can greatly simplify your BRANCH constructs. The regular expression appendix in the B2B Integrator Guide describes the syntax. The regular expression in a label must be surrounded with slashes. A couple of examples:

/.+/
Tests for one or more characters. Strings that are $null or empty will not be selected by this label (i.e. the branch won't take this path).

/^ISA/
Tests that the string starts with the characters "ISA".

FLOW example:

BRANCH on '/tailOfAK5'
/.+/: MAP (tailOfAK5 has one or more chars)
$default: SEQUENCE (tailOfAK5 is empty or null)

How to invoke a flow service from a Java client?

IData out = context.invoke("namespace", "serviceName", input);
out: the IData where Java stores the output of the webMethods service.
namespace: the folder where your service is stored in webMethods (e.g. "pub.flow").

serviceName: the name of the service you call from the Java code.

input: the IData input of the service to be called.
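A minimal, self-contained sketch using the IS Java client API (this assumes the client jar shipped with the IS is on the classpath; the host, credentials, and target service pub.flow:debugLog are illustrative):

import com.wm.app.b2b.client.Context;
import com.wm.data.*;

public class InvokeFlowClient {
    public static void main(String[] args) throws Exception {
        Context context = new Context();
        // Connect to the IS (host:port, username, password)
        context.connect("localhost:5555", "Administrator", "manage");
        try {
            // Build the input pipeline for the service
            IData input = IDataFactory.create();
            IDataCursor c = input.getCursor();
            IDataUtil.put(c, "message", "hello from a Java client");
            c.destroy();
            // invoke(folder, serviceName, input) returns the output pipeline
            IData out = context.invoke("pub.flow", "debugLog", input);
        } finally {
            context.disconnect();
        }
    }
}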

WebMethods -1: Audit Explained

The Integration Server (IS) has a feature that helps track the execution of the services invoked, from start to end, in a web service call.

It is set at two levels.


1.      IS level



It has three settings:

•         PerSvc: Indicates that auditing is enabled and that logging will be based on the audit properties set on the individual flows of the service.

•         Brief: Indicates that logging is always enabled. Flows are logged on error, on success, and at the start. The audit properties set on individual flows are ignored.

•         Verbose: Provides the complete functionality of Brief. In addition, the pipeline of each logged flow is also included.

2.      In an individual flow of a service.



The Audit properties of a flow have three parameters:

•         Enable auditing
Never: Auditing is turned off for the current flow.
When top-level service only: Audited only if the flow is the entry point of a service invocation.
Always: Auditing is turned on for the current flow.

•         Log on
Error only: the flow is logged only when an error/exception occurs.
Error and success: the flow is logged both on success and on error/exception.
Error, success and start: the flow is logged in all scenarios.

•         Include pipeline
Never: Do not include the pipeline when the flow is logged.
On error only: Include the pipeline only in case of error.
Always: The pipeline is always included when the flow is logged.


Key takeaways:

•         Flows are logged in a database, which is displayed in My webMethods Server.
•         A flow logged with its pipeline can be retrieved and used to debug the same flow or subsequent flows.
•         Log wisely: too much audit logging costs performance, while no logging leaves you blind in case of an error in production.
•         Audit logging gives an indication of the flows invoked when a call is made to a service.
•         There are more than eight types of logging mechanisms in webMethods. All are logged to files by default. We need to configure the heavy logging to point to a database, or else the IS will crash.





How to deploy a Package using wmDeployer?

Prerequisites before Deployment

Take Backup of Package on Target
Log in to the IS > Go to Package Management > Archive
Archive Name =__
Eg: Training_Ticket NO

Deployments:

1. Open the webMethods Deployer tool

http://hostname:Port/WmDeployer/
Enter username and password

2. Create Project

Project Name Syntax = __
Eg : INM0000543538055_Source IS Name_Target IS Name
Click Create
Default Settings…….Save

3. DEFINE

Create Deployment Set
Default Settings…….Create
Select the Source IS
Save
Click on Packages (left side)
If there are unresolved dependencies (red exclamation mark):
Then open webMethods Developer
Search for each unresolved dependency through webMethods Developer
If it exists in the Target IS, select "Exists"
If it does not exist in the Target IS, select "Add"
Never deploy Connectors, Users, or ACLs
Save…

4. BUILD

Create Build
Defaults….Create

5. MAP

Create Deployment Map
Create

6. DEPLOY

Create Deployment Candidate
Create
Simulate
Checkpoint
Deploy

If the deployment succeeds, inform the ticket raiser and resolve the ticket. If the deployment fails, perform a rollback and inform the developer/ticket raiser.
(Click on Deployment Report for the Error Details).

After installation if you get this error libswlnk.dll not found?

After wM 7.1 installation and database configuration (Oracle 10g),

I got this error:

"The application has failed to start because libswlnk.dll was not found. Reinstalling the application may fix the problem."
I reinstalled the IS, but still got the same error.


I got the solution for this problem.

1. This DLL is required by the WmSwift module. I removed this e-standard and it worked fine for me.

Alternatively 

2. You can put WmSWIFTNetServer.dll and libswlnk.dll into your IntegrationServer/lib folder.

How to handle Large Flat Files?

By using ffIterator we can handle large files.


Large File Handling Techniques in webMethods

Many people have doubts regarding large file handling using the webMethods Integration tool; below is the process for handling it:
In middleware integration we often face the issue of how to handle large flat file and XML data. In the case of large files (say, greater than 2 MB), loading the file directly into the server will slow down the Integration Server, and if the file is huge (greater than 100 MB) it might even crash the server. So, to save ourselves from this situation we must take a precautionary measure, such as streaming the file into the server. In this blog we will see how to do large file handling in the Integration Server for both flat files and XML files.

Large File Handling (Flat Files)

Now we will see how to do large file handling in the case of flat files.
We can use the webMethods FilePolling port, pub.file:getFile, or FTP (i.e. the pub.client:ftp service) to get the file, depending on whether the file is on the local system or on a remote FTP system relative to the webMethods Integration Server (IS).
The webMethods FilePolling port streams the file into the IS by default; for pub.file:getFile, set the optional input parameter loadAs to stream and it will stream the file into the server.
Normally we use the pub.flatFile:convertToValues service to convert the flat file into an IS document. This time we will use the same service with a slight modification: we will set the iterate input parameter to true.

pub.flatFile:convertToValues: This service is used to convert flat file data into an IS document. iterate = true ensures that the records from the flat file are read one by one. The output ffIterator is passed as input to the next invocation of this service.



If ffIterator is null, exit the loop; it means all the available records in the flat file have been processed.
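In outline, the iterating flow service typically looks like this (the step comments are illustrative):

pub.file:getFile (loadAs = stream)
REPEAT (on SUCCESS)
  pub.flatFile:convertToValues (iterate = true; pass ffIterator back in on each iteration)
  BRANCH on '/ffIterator'
    $null: EXIT (all records processed)
  MAP (process the current record)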


Large File Handling XML

Generally, in real-time scenarios it has been observed that large XML files contain some nodes that are repeated many times, which is what makes the file large. Large file handling for XML in webMethods targets this point, and built-in services are available to handle these kinds of scenarios.
Let's first look at the built-in services for large file handling in the case of XML.

pub.xml:xmlStringToXMLNode – This service converts the input from a String to an XML node. It is a general-purpose service used in webMethods wherever we want to convert XML string data to an XML node. It is a well-performing built-in service and plays a vital role in large file handling.

pub.xml:getXMLNodeIterator: This service gets the nodes one by one according to the criteria set in the pipeline. Here the criteria are set as EmployeeDetails and OFCDetails. This is the main service used for large file handling for XML; just like the flat file service, it iterates over the nodes and processes them one by one.


pub.xml:getNextXMLNode: This service is used to get the next node.



pub.xml:xmlNodeToDocument: This service converts the XML node to a document.

Now let's see the structure of the main flow service where large file handling for XML is done.
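In outline, following the services described above (the criteria and step comments are illustrative):

pub.xml:xmlStringToXMLNode (XML string -> node)
pub.xml:getXMLNodeIterator (criteria = EmployeeDetails, OFCDetails)
REPEAT (on SUCCESS)
  pub.xml:getNextXMLNode
  BRANCH on '/next'
    $null: EXIT (no more matching nodes)
  pub.xml:xmlNodeToDocument (convert the current node)
  MAP (process the resulting document)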



This is how we handle large files... :)

How are Java services organized on the webMethods server?

All Java services in the same folder are stored as methods in one Java class.

When is a copy of the input pipeline saved in the audit log?

You need to set the Include pipeline option in the service's Audit properties.

What is a flat file schema and what are its uses?

A flat file schema is a blueprint that contains the rules for the flat file. The IS validates the flat file against this flat file schema.

JDBC Connection Parameters?

Enable Connection Pooling: Enables the connection to use connection pooling.

Minimum Pool Size: If connection pooling is enabled, this field specifies the number of connections to create when the connection is enabled. The adapter will keep open the number of connections you configure here regardless of whether these connections become idle.

Maximum Pool Size: If connection pooling is enabled, this field specifies the maximum number of connections that can exist at one time in the connection pool.

Pool Increment Size: If connection pooling is enabled, this field specifies the number of connections by which the pool will be incremented if connections are needed, up to the maximum pool size.

Block Timeout: If connection pooling is enabled, this field specifies the number of milliseconds that the Integration Server will wait to obtain a connection with the database before it times out and returns an error.

Expire Timeout: If connection pooling is enabled, this field specifies the number of milliseconds that an inactive connection can remain in the pool before it is closed and removed from the pool.

Startup Retry Count (For Integration Server 6.1 only): The number of times that the system should attempt to initialize the connection pool at startup if the initial attempt fails. The default is 0.

Startup BackoffTimeout (For Integration Server 6.1 only): The number of seconds that the system should wait between attempts to initialize the connection pool.
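As an illustration only (these values are examples, not recommendations), a small pool might be configured as:

Enable Connection Pooling: true
Minimum Pool Size: 1
Maximum Pool Size: 10
Pool Increment Size: 1
Block Timeout: 1000 (wait up to 1 second for a free connection)
Expire Timeout: 60000 (close connections idle for more than 1 minute)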

What is indices in MAP flow step?

In a MAP step, indices are used when you have an array of data and you want to map one element of the array to a target element. For example, to map only the first element of a string list to a single string, set the index on the list side of the link to 0.

What is meant by "copy" condition in webMethods?

We can associate a condition with a link between two variables in the Pipeline tab of a MAP step.
Only if the condition is true will the variable value be copied into the other variable; otherwise it won't be copied.

To accomplish this, set the "Copy condition" property to TRUE and write the condition you want to check in the copy condition text box in the Properties panel. Such a link appears in blue in the mapping.

Trigger Properties in webMethods?

Trigger Retries#

If you are not using trigger retries then set the retry count to 0. This will noticeably improve performance, especially as documents get larger and more complex.

Trigger Processing Mode#

Serial processing mode is used to enforce document order on consumption. In a single instance environment, the order of processing is the order in the queue. In a clustered environment, the order of processing is based on publisher order, i.e. an instance acquires ownership of documents from one source and then exclusively processes these in a single-threaded fashion, in the order they appear in the queue. Other sources may be processed by other IS instances in the cluster. For most general purposes, the processing mode will be set to concurrent, and this gives far better performance.

Rough Guide:#

Trigger Processing Mode = Concurrent, assuming order of processing is not important

Trigger Threads#

The number of threads should generally be no more than a small multiple of the number of CPU cores available to the IS, also considering that all service threads within the Integration Server must share CPU resources. The number of threads may be increased further where the work done in the service has a relatively low CPU content, for example where there is a lot of IO involved, or where the service thread is waiting for external applications or resources. Setting trigger threads too high will start to incur context-switching overheads at the OS level and within the JVM.

Rough Guide:#

Trigger Threads = 4 x CPU, except where order of processing is important and Serial processing mode is used

Other Considerations#

Consider the amount of work each thread must do, not just for one trigger but for all thread consumers. If the trigger service is very short and lightweight, it can support more threads than a more computationally expensive service. Document size plays a part, but it is only one reason that threads become computationally expensive. Review all the triggers in the context of the whole system, not just the single trigger.

Trigger Cache Size and Refill Level#

The trigger cache size defines the number of documents that may be held in memory while documents are unacknowledged on the broker. The cache is filled with documents (in batches of up to 160 at a time) from the Broker, so a larger cache size reduces the number of read activities performed on the Broker. The IS goes back to the Broker for more documents when the documents left in the cache falls below the Refill Level. The objective in setting these parameters is to ensure that whenever a trigger thread becomes available for use, there is a document already in the cache. The Cache Size should be as small as it can be whilst still being effective, to minimize the use of memory in the IS (note the size is specified in documents, not based on total size held). If the processing of documents is generally very short, the cache should be larger. As a rough guide, the cache size may be 5 to 10 times the number of trigger threads, and the refill level 30%-40% of that value (or the refill should be twice the number of trigger threads).

Rough Guide:#

Trigger Cache Size = 5 x Trigger Threads
Trigger Refill Level = 2 x Trigger Threads
Trigger Cache Memory Usage = Trigger Cache Size x Average Document Size
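Worked example, assuming 4 CPU cores available to the IS and an average document size of 10 KB:

Trigger Threads = 4 x 4 = 16
Trigger Cache Size = 5 x 16 = 80 documents
Trigger Refill Level = 2 x 16 = 32 documents
Trigger Cache Memory Usage = 80 x 10 KB = 800 KB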

Other Considerations#

For small documents with lightweight services these setting could be too conservative and for large documents it could be too aggressive.

Acknowledgement Queue Size#

The AckQ is used to collect acknowledgements for documents processed by the trigger threads when they complete. If set to a size of one, then the trigger thread waits for the acknowledgement to be received by the Broker before it completes. If the AckQ size is greater than one, then the trigger thread places the acknowledgement in the AckQ and exits immediately. A separate acknowledging thread polls the AckQ periodically to write acknowledgements to the broker. If the AckQ reaches capacity then it is immediately written out to the broker, with any trigger threads waiting to complete while this operation is done. Setting the AckQ size greater than one enables the queue, and reduces the wait time in the trigger threads. If performance is important, then the AckQ should be set to a size of one to two times the number of trigger threads. Acknowledgements only affect guaranteed document types. Volatile documents are acknowledged automatically upon reading them from the Broker into the Trigger Cache.

Rough Guide:#

Acknowledgement Queue Size = 2 x Trigger Threads

Other Considerations#

The potential caveat to this setting is the number of documents that might need to be reprocessed in the event of a server crash.

In-Memory Storage#

Volatile documents are handled entirely in memory and so the quality of storage is propagated into the handling in the IS as well. Loss of memory results in loss of a volatile document whether it is held by the Broker or by the IS. This is also why acknowledgements are returned to the Broker upon reading a volatile document.
For guaranteed messages, in-memory storage about the state of a message can exist in both the Trigger Cache and in the Acknowledgement Queue. If the IS terminates abnormally, then this state is lost. However, for unacknowledged, guaranteed documents, the redelivery flag will always be set on the Broker as soon as the document is accessed by the IS. Therefore after an abrupt IS termination or disconnection, the unacknowledged documents will be presented either to the same IS upon restart, or once the Broker determines that the IS has lost its session, to another IS in the same cluster.
All these documents will have the redelivery flag set and may be managed using the duplicate detection features, described in the Pub/Sub User Guide.
In such a failure scenario, the number of possible unacknowledged messages will be a worst case of Trigger Cache Size plus Acknowledgement Queue Size. The number of documents that had completed processing but were not acknowledged will be a worst case of Trigger Threads plus Acknowledgement Queue Size. The number of documents that were part way through processing but hadn't completed will be a worst case of Trigger Threads. The number of documents that will have the redelivery flag set but had actually undergone no processing at all will be a worst case of Trigger Cache Size.

Other Considerations#

If the trigger is subscribing to multiple document types (has multiple subscription conditions defined), then the trigger threads are shared by all document types. This may give rise to variations in the processing required for each message and the size of each message in the cache. Where this complicates the situation, it is better to use one condition per trigger.
If document joins are being used, refer to the user guide for information about setting join timeouts. A trigger thread is only consumed when the join is completed and the document(s) are passed to the service for processing.