How to deploy a Package using wmDeployer?

Prerequisites before Deployment

Take a backup of the package on the target:
Log in to the IS > Go to Package Management > Archive
Archive Name = __
Eg: Training_<Ticket No>

Deployments:

1. Open the webMethods Deployer tool

http://hostname:Port/WmDeployer/
Enter username and password

2. Create Project

Project Name syntax = __
Eg: INM0000543538055_<Source IS Name>_<Target IS Name>
Click Create
Keep the default settings and click Save

3. DEFINE

Create a Deployment Set
Keep the default settings and click Create
Select the source IS
Save
Click on Packages (left side)
If there are unresolved dependencies (red exclamation mark):
Open webMethods Developer
Search for each unresolved dependency in webMethods Developer
If it exists on the target IS, mark it as "Exists"
If it does not exist on the target IS, mark it as "Add"
Never deploy Connectors, Users, or ACLs
Save

4. BUILD

Create a Build
Keep the defaults and click Create

5. MAP

Create a Deployment Map
Click Create

6. DEPLOY

Create a Deployment Candidate
Click Create
Simulate
Checkpoint
Deploy

If the deployment succeeds, inform the ticket raiser and resolve the ticket. If the deployment fails, perform a rollback and inform the developer/ticket raiser.
(Click on the Deployment Report for the error details.)

After installation, what if you get the error "libswlnk.dll not found"?

After a wM 7.1 installation and database configuration (Oracle 10g),

I get the error:

"The application has failed to start because libswlnk.dll was not found. Reinstalling the application may fix the problem."
I reinstalled the IS, but I am still getting the same error.


I found the solution to this problem.

1. This DLL is required by the WmSwift module; removing this eStandard made it work fine for me.

Alternatively,

2. You can put WmSWIFTNetServer.dll and libswlnk.dll into your Integration Server's lib folder.

How to handle Large FlatFiles?

By using ffIterator we can handle large files.


Large File Handling Techniques in webMethods

Many people have doubts regarding large file handling with the webMethods integration tool; below is the process for handling it.
In middleware integration we commonly face the issue of how to handle large flat file and XML data. A large file (say, greater than 2 MB) loaded directly into the Integration Server will slow the server down, and a huge file (greater than 100 MB) might even crash it. So, to save ourselves from this situation, we must take a precautionary measure, such as streaming the file into the server. In this blog we will see how to do large file handling in Integration Server for both flat files and XML.

Large File Handling (Flat Files)

Now we will see how to do large file handling in the case of flat files.
We can use a webMethods file polling port, pub.file:getFile, or FTP (i.e., the pub.client:ftp service) to get the file, depending on whether the file is on the local system or a remote FTP system relative to the webMethods Integration Server (IS).
A webMethods file polling port streams the file into the IS by default; for pub.file:getFile, set the optional input parameter loadAs to stream and it will stream the file into the server.
Normally we use the pub.flatFile:convertToValues service to convert a flat file into an IS document. Here we use the same service with a slight modification: we set the iterate input parameter to true.

pub.flatFile:convertToValues: This service is used to convert flat file data to an IS document. iterate = true ensures that the records from the flat file are read one by one. The ffIterator output is passed as input to the next invocation of this service.



If ffIterator is null, exit the loop: all the available records in the flat file have been processed. A minimal flow skeleton for this pattern is shown below.
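A rough sketch of the looping structure (everything except the built-in service and its ffIterator/ffValues fields is illustrative):

REPEAT (repeat on SUCCESS, count = -1)
  INVOKE pub.flatFile:convertToValues   (iterate = true; pass ffIterator back in on each pass)
  BRANCH on '/ffIterator'
    SEQUENCE [label: $null]
      EXIT '$loop' (signal SUCCESS)     <- all records processed
    SEQUENCE [label: $default]
      ... process the current record in ffValues ...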


Large File Handling XML

Generally, in real-time scenarios with large XML, some nodes repeat many times, and this is what makes the XML file large. Large file handling for XML in webMethods targets this point, and built-in services are present to handle these kinds of scenarios.
Let's first see the built-in services used to do large file handling in the case of XML.

pub.xml:xmlStringToXMLNode – This service converts the input from an XML string to an XML node. It is a general-purpose service used wherever we want to convert XML string data to an XML node; it is an efficient built-in service and plays a vital role in large file handling.

pub.xml:getXMLNodeIterator: This service gets the nodes one by one according to the criteria set in the pipeline (here the criteria are EmployeeDetails and OFCDetails). This is the main service used for large file handling of XML; just like the flat file service, it iterates over the nodes so they can be processed one by one.


pub.xml:getNextXMLNode: This service is used to get the next node from the iterator.



pub.xml:xmlNodeToDocument: This service converts an XML node to an IS document.

Now let's see the structure of the main flow service where large file handling for XML is done; a sketch follows.
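A rough sketch of such a flow service (the criteria names come from the example above; everything else is illustrative):

INVOKE pub.xml:xmlStringToXMLNode      (xmldata -> node)
INVOKE pub.xml:getXMLNodeIterator      (node in; criteria = EmployeeDetails, OFCDetails)
REPEAT (repeat on SUCCESS, count = -1)
  INVOKE pub.xml:getNextXMLNode        (iterator in -> next)
  BRANCH on '/next'
    SEQUENCE [label: $null]
      EXIT '$loop' (signal SUCCESS)    <- no more matching nodes
    SEQUENCE [label: $default]
      INVOKE pub.xml:xmlNodeToDocument (convert the current node)
      ... process the resulting document ...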



This is how we handle large files... :)

How are Java services organized on the webMethods server?

All Java services in the same folder are stored as methods of one Java class.
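A sketch of what that generated class looks like (the folder and service names are hypothetical):

// Folder "myFolder" -> one generated class; each Java service in the folder is a static method
import com.wm.data.IData;
import com.wm.app.b2b.server.ServiceException;

public final class myFolder {
    // Java service myFolder:serviceA
    public static final void serviceA(IData pipeline) throws ServiceException {
        // ... service logic operating on the pipeline ...
    }

    // Java service myFolder:serviceB - a sibling service, another method of the same class
    public static final void serviceB(IData pipeline) throws ServiceException {
        // ...
    }
}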

When Is a Copy of the Input Pipeline Saved in the Audit Log?

You need to set the "Include pipeline" option in the service's Audit properties (for example, to "On errors only" or "Always").

What is a flat file schema and what are its uses?

A flat file schema is a blueprint that defines the rules for a flat file. The IS validates the flat file against this flat file schema.

JDBC Connection Parameters?

Enable Connection Pooling: Enables the connection to use connection pooling.

Minimum Pool Size: If connection pooling is enabled, this field specifies the number of connections to create when the connection is enabled. The adapter will keep open the number of connections you configure here regardless of whether these connections become idle.

Maximum Pool Size: If connection pooling is enabled, this field specifies the maximum number of connections that can exist at one time in the connection pool.

Pool Increment Size: If connection pooling is enabled, this field specifies the number of connections by which the pool will be incremented if connections are needed, up to the maximum pool size.

Block Timeout: If connection pooling is enabled, this field specifies the number of milliseconds that the Integration Server will wait to obtain a connection with the database before it times out and returns an error.

Expire Timeout: If connection pooling is enabled, this field specifies the number of milliseconds that an inactive connection can remain in the pool before it is closed and removed from the pool.

Startup Retry Count (For Integration Server 6.1 only): The number of times that the system should attempt to initialize the connection pool at startup if the initial attempt fails. The default is 0.

Startup Backoff Timeout (For Integration Server 6.1 only): The number of seconds that the system should wait between attempts to initialize the connection pool.
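A worked example of how these settings interact (the values are illustrative): with Minimum Pool Size = 5, Maximum Pool Size = 20, and Pool Increment Size = 5, the pool opens 5 connections when the connection is enabled; when all are busy and another is requested, the pool grows by 5 at a time up to 20. A request arriving when all 20 connections are busy waits up to Block Timeout milliseconds before an error is returned, and a connection idle for longer than Expire Timeout milliseconds is closed and removed from the pool.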

What are indices in a MAP flow step?

In a MAP step, indices are used when you have an array of data and you want to map one particular element of the array to the target element. For example, setting the index to 2 on the link copies only the element at index 2.

What is meant by "copy" condition in webMethods?

We can associate a condition before linking 2 variables in the pipeline tab of the map steps.
If the condition is true then only the variable value will be copied into the other variable otherwise it won't be copied.

To accomplish this, we set the "copy condition" property of the link to TRUE and write the condition we want to check (for example, %status% == "active") in the copy-condition text box in the Properties panel. Such a link appears in blue in the mapping.

Trigger Properties in webMethods?

Trigger Retries#

If you are not using trigger retries then set the retry count to 0. This will noticeably improve performance, especially as documents get larger and more complex.

Trigger Processing Mode#

Serial processing mode is used to enforce document order on consumption. In a single instance environment, the order of processing is the order in the queue. In a clustered environment, the order of processing is based on publisher order i.e. an instance acquires ownership for documents from one source and then exclusively processes these in a single threaded fashion the order they appear in the queue. Other sources may be processed by other IS instances in the cluster. For most general purposes, the processing mode will be set to concurrent and this gives far better performance.

Rough Guide:#

Trigger Processing Mode = Concurrent, assuming order of processing is not important

Trigger Threads#

The number of threads should generally be no more than a small multiple of the number of CPU cores available to the IS, also considering that all service threads within the Integration Server must share CPU resources. The number of threads may be increased further where the work done in the service has a relatively low CPU content, for example where there is a lot of IO involved, or where the service thread is waiting for external applications or resources. Setting trigger threads too high will start to incur context-switching overheads at the OS level and within the JVM.

Rough Guide:#

Trigger Threads = 4 x CPU cores, except where order of processing is important and Serial processing mode is in use

Other Considerations#

Consider the amount of work each thread must do, not just for one trigger but across all thread consumers. If the trigger service is very short and lightweight, the trigger can support more threads than one whose service is computationally expensive. Document size plays a part, but it is only one reason that threads become computationally expensive. Review all the triggers in the context of the whole system, not just the single trigger.

Trigger Cache Size and Refill Level#

The trigger cache size defines the number of documents that may be held in memory while documents are unacknowledged on the broker. The cache is filled with documents (in batches of up to 160 at a time) from the Broker, so a larger cache size reduces the number of read activities performed on the Broker. The IS goes back to the Broker for more documents when the documents left in the cache falls below the Refill Level. The objective in setting these parameters is to ensure that whenever a trigger thread becomes available for use, there is a document already in the cache. The Cache Size should be as small as it can be whilst still being effective, to minimize the use of memory in the IS (note the size is specified in documents, not based on total size held). If the processing of documents is generally very short, the cache should be larger. As a rough guide, the cache size may be 5 to 10 times the number of trigger threads, and the refill level 30%-40% of that value (or the refill should be twice the number of trigger threads).

Rough Guide:#

Trigger Cache Size = 5 x Trigger Threads
Trigger Refill Level = 2 x Trigger Threads
Trigger Cache Memory Usage = Trigger Cache Size x Average Document Size
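A worked example (illustrative numbers): on a 4-core host, the rough guides give Trigger Threads = 4 x 4 = 16, Trigger Cache Size = 5 x 16 = 80 documents, and Refill Level = 2 x 16 = 32; with an average document size of 50 KB, the cache may then hold up to 80 x 50 KB = 4 MB.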

Other Considerations#

For small documents with lightweight services these settings could be too conservative, and for large documents they could be too aggressive.

Acknowledgement Queue Size#

The AckQ is used to collect acknowledgements for documents processed by the trigger threads when they complete. If set to a size of one, then the trigger thread waits for the acknowledgement to be received by the Broker before it completes. If the AckQ size is greater than one, then the trigger thread places the acknowledgement in the AckQ and exits immediately. A separate acknowledging thread polls the AckQ periodically to write acknowledgements to the broker. If the AckQ reaches capacity then it is immediately written out to the broker, with any trigger threads waiting to complete while this operation is done. Setting the AckQ size greater than one enables the queue, and reduces the wait time in the trigger threads. If performance is important, then the AckQ should be set to a size of one to two times the number of trigger threads. Acknowledgements only affect guaranteed document types. Volatile documents are acknowledged automatically upon reading them from the Broker into the Trigger Cache.

Rough Guide:#

Acknowledgement Queue Size = 2 x Trigger Threads

Other Considerations#

The potential caveat to this setting is the number of documents that might need to be reprocessed in the event of a server crash.

In-Memory Storage#

Volatile documents are handled entirely in memory and so the quality of storage is propagated into the handling in the IS as well. Loss of memory results in loss of a volatile document whether it is held by the Broker or by the IS. This is also why acknowledgements are returned to the Broker upon reading a volatile document.
For guaranteed messages, in-memory storage about the state of a message can exist in both the Trigger Cache and in the Acknowledgement Queue. If the IS terminates abnormally, then this state is lost. However, for unacknowledged, guaranteed documents, the redelivery flag will always be set on the Broker as soon as the document is accessed by the IS. Therefore after an abrupt IS termination or disconnection, the unacknowledged documents will be presented either to the same IS upon restart, or once the Broker determines that the IS has lost its session, to another IS in the same cluster.
All these documents will have the redelivery flag set and may be managed using the duplicate detection features, described in the Pub/Sub User Guide.
In such a failure scenario, the number of possible unacknowledged messages will be a worst case of Trigger Cache Size plus Acknowledgement Queue Size. The number of documents that had completed processing but were not acknowledged will be a worst case of Trigger Threads plus Acknowledgement Queue Size. The number of documents that were part way through processing but hadn't completed will be a worst case of Trigger Threads. The number of documents that will have the redelivery flag set but had actually undergone no processing at all will be a worst case of Trigger Cache Size.

Other Considerations#

If the trigger is subscribing to multiple document types (has multiple subscription conditions defined), then the trigger threads are shared by all document types. This may give rise to variations in the processing required for each message and the size of each message in the cache. Where this complicates the situation, it is better to use one condition per trigger.
If document joins are being used, refer to the user guide for information about setting join timeouts. A trigger thread is only consumed when the join is completed and the document(s) are passed to the service for processing.

Difference between custom sql ,dynamic sql ? when we use custom sql,when we use dynamic sql?

Custom SQL --> the SQL statement is fixed; it does not take new SQL text at run time.
Dynamic SQL --> the SQL statement (or part of it) can be passed at run time.

For both Custom SQL and Dynamic SQL we have to write the queries explicitly. The main difference is that in Custom SQL the query is given at design time, while in Dynamic SQL it can be given at run time.

By using Custom SQL, one can execute any static SQL statement, but with Dynamic SQL you execute whatever query you set in the input field.

In Custom SQL, you can pass inputs to your SQL query at runtime. With Dynamic SQL, your entire SQL statement, or part of it (like the WHERE clause), can be passed at runtime, along with inputs to it. In simple words, with Dynamic SQL you can dynamically build your SQL statement at runtime.

With Custom SQL the query is fixed, with input variables that are passed to the custom adapter service. You use Dynamic SQL when the SQL query changes at runtime: in that case you prepare the SQL query and pass it to the dynamic adapter service at runtime. The sketch below illustrates the difference.
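A sketch of the difference (the table, column, and field names are illustrative):

Custom SQL adapter service - the statement is fixed at design time; inputs are bound to ? placeholders:
  SELECT name, salary FROM employees WHERE dept_id = ?

Dynamic SQL adapter service - part of the statement is supplied at run time through a ${...} input field:
  SELECT name, salary FROM employees WHERE ${whereClause}
  (at run time, pass e.g. whereClause = "dept_id = 10 AND salary > 50000")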

Trigger Acknowledgement Queue Size?

Acknowledgement Queue Size#
See "Acknowledgement Queue Size" under "Trigger Properties in webMethods?" above; the same guidance applies.
Rough Guide:#

Acknowledgement Queue Size = 2 x Trigger Threads

Trigger Processing Mode?

Situation: a clustered environment with 2 ISs and the trigger on both ISs. The cluster has one client queue on the Broker.
If the trigger is set to Serial, only 1 document is processed at a time. Either IS can pull a document from the client queue into its trigger queue, but never both at once.

If the trigger is set to Concurrent with max threads = 1, both ISs can pull 1 document each at the same time from the client queue into their respective trigger queues.

Define Webservice connector?

A web service connector is a service that invokes a web service located on a remote server. Developer uses a WSDL document to generate the connector automatically.

How do I throw an exception when using a try-catch block?

Set a flag in your catch block, or leave a variable holding the error message in the pipeline.
Outside the catch block, put a BRANCH on that variable or flag; if it is non-null, exit with failure or call the service that generates the exception. A sketch of the re-throw follows.
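A minimal sketch of the pattern (the errorMsg variable is illustrative):

SEQUENCE (main) [exit on SUCCESS]
  SEQUENCE (try) [exit on FAILURE]
    ... steps that may fail ...
  SEQUENCE (catch) [exit on DONE]
    INVOKE pub.flow:getLastError    <- lastError describes the original failure
    MAP                             <- copy lastError/error into errorMsg
BRANCH on '/errorMsg'
  SEQUENCE [label: $default]        <- errorMsg is present: re-throw
    EXIT '$flow' (signal FAILURE, failure message = %errorMsg%)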

What Is the Pipeline?

The pipeline is the general term used to refer to the data structure in which input and output values are maintained for a flow service. It allows services in the flow to share data. The pipeline starts with the input to the flow service and collects inputs and outputs from subsequent services in the flow. When a service in the flow executes, it has access to all data in the pipeline at that point.

In which case the transformers should not be used? and In which case we are not advised to use Transformers?

The output of one transformer cannot be used as the input of another transformer in the same MAP step.
Transformers in a MAP step are independent of each other and do not execute in a specific order. When inserting transformers, assume that webMethods Integration Server concurrently executes the transformers at run time.

When an exception occurs in a transformer, it wipes out the pipeline values, and lastError will not contain the service stack either. So a transformer is not advised when there is a possibility of an exception from the service used as the transformer. Ex: the addInts service throws an exception when an input is null. So if there is a possibility that one value may be null, do not use a transformer for that value transformation. Also, addInts is a common service that can be invoked by many services; when an exception is caused by addInts used as a transformer, you can guess how tough it would be to track down the service that failed with the exception.

The above answer is partially correct, and this is also one of the pitfalls of getLastError: it catches the latest exception and overwrites the previous one, so we cannot guarantee that all exceptions are caught and accounted for in getLastError.

How to create a link between variables?

You have to map the input pipeline variable A to the output pipeline variable B.
Steps:
1. Create any new flow service
2. Add input and output variables as required
3. Insert any previously created service, or insert a MAP
4. Click in the Pipeline In area, then click on variable A
5. Click in the Pipeline Out area, then click on variable B
6. Now click the link button.
Note: Before this, the service you are working on must be locked by you.

How can you find the file name in the file-polling concept?

In the file polling concept, we poll a particular directory using a service. As for finding the file name: in our project the client places the file in a shared directory, and we fetch the file through a front-end application such as an FTP service by providing credentials.


Ans: You can use the pub.flow:getTransportInfo service to get the name of the file received by your IS.

What Are Transformers?

Transformers are the services you use to accomplish value transformations on the Pipeline tab.
You can only insert a transformer into a MAP step.
You can use any service as a transformer.
This includes any Java, C, or flow service that you create and any built-in service in WmPublic, such as the pub.date:getCurrentDateString and pub.string:concat services. By using transformers, you can invoke multiple services (and perform multiple value transformations) in a single flow step.
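For example (the field names are illustrative), a single MAP step could hold two transformers side by side:

MAP
  transformer: pub.string:concat             (inString1 = firstName, inString2 = lastName -> value = fullName)
  transformer: pub.date:getCurrentDateString (pattern = "yyyyMMdd" -> value = today)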

What Are Event Handlers?

Event handlers are the services that you write to perform some action when a specific event occurs.

If I have to move packages from one IS to another, which process would you suggest: wmDeployer, or some other process like publish/subscribe?

If we want to move only one or two packages, publish/subscribe is OK. For a full code migration, the best option is wmDeployer.

Can you explain trading networks?

A trading network is a set of organizations that have agreed to exchange business documents. webMethods Trading Networks is a component that runs on the webMethods Integration Server. Trading Networks enables your enterprise to form a business-to-business trading network with other companies and marketplaces.

Trading Networks is a component of webMethods. The main purpose of this component is to interact with outside organizations to exchange business information securely. It provides functions such as filtering documents, routing documents, saving documents to the database, reprocessing, etc.

It supports document formats such as EDI, flat file, XML, RosettaNet, etc.

What Is a Replication Service?

A replication service is one that the IS automatically executes when it prepares to replicate a package.

Locking and Unlocking of Java Services

Locking and unlocking actions on Java and C/C++ services are folder-wide. All Java and C/C++ services in a folder share the same .java and .class files on the Integration Server. These files, located in the \code subdirectory of a package, correspond to all services (except flow services) in a folder. Therefore, when you lock a Java/C service, all Java/C services in that folder are locked.

For example, if you lock a Java service in a folder A, all Java and C/C++ services in folder A are locked by you. Similarly, if another user has locked a Java service in folder B, you cannot add, edit, move, or delete any Java or C/C++ services in folder B. Locking actions on Java and C/C++ services are ACL dependent. If you want to lock one or more Java or C/C++ services within a folder, you must have Write access to all Java and C/C++ services in that folder. This is because Java and C/C++ services within a folder share the same .java and .class files.
The jcode development environment operates independently of locking. If you use jcode to develop Java services, you do not have the locking functionality that is available in the Integration Server. When you use jcode, you may compile a service that is locked by another user, overwriting that user’s changes to the service. Therefore, if you use jcode, do not use the locking features in the Integration Server.

Before you save a Java or C/C++ service, multiple corresponding files must be writable on the server. A single Java or C/C++ service corresponds to the following files:
.java
.class
.ndf
.frag (may not be present)
Before you save a Java or C/C++ service, all of the preceding files must be writable. Therefore, make sure that all system locks are removed from those files before saving.

Can we multi-select elements to lock or unlock in the Navigation Panel?

Yes.
But your selection must not contain:
1. the server
2. a folder or package and its contents
3. a package and any other element
4. an adapter notification record





How to use SEQUENCE as the Target of a BRANCH?

Set the "evaluate labels" property of the BRANCH step to true.

Then set the label property of each SEQUENCE to the condition under which it should be executed.

We can also use BRANCH in the context of a switch/case: for that, set the "evaluate labels" property of the BRANCH to false, set the "switch" property of the BRANCH to the variable you want to switch on, and set the label property of each target step to a possible value of that variable. Sketches of both forms follow.
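Sketches of both forms (the variable and values are illustrative):

BRANCH (evaluate labels = true)
  SEQUENCE [label: %status% == "NEW"]     ... handle new ...
  SEQUENCE [label: %status% == "CLOSED"]  ... handle closed ...
  SEQUENCE [label: $default]              ... fallback ...

BRANCH on '/status' (evaluate labels = false)
  SEQUENCE [label: NEW]                   ... handle new ...
  SEQUENCE [label: CLOSED]                ... handle closed ...
  SEQUENCE [label: $default]              ... fallback ...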

What are Structural transformations?

Splitting one field into several, merging fields, reordering portions of a message, or renaming fields are known as structural transformations.

Can you please tell me about webMethods in brief?

webMethods is an Enterprise Application Integration (EAI) tool. It can be used to integrate applications within an organization, and also to integrate third-party/vendor applications.
The webMethods suite of applications can be used to implement an ESB and BPM.

webMethods is a middleware solution used to integrate different business applications.

What Is a Shutdown Service?

A shutdown service is one we can write and add to the shutdown services list. These services are executed when the server shuts down; in them we can write code to close network connections, release memory, etc.

A shutdown service is also one that the IS automatically executes when it unloads a package.

Join types in Trigger? Explain?

There are three types of joins in a trigger:

1) All (AND)
2) Any (OR)
3) Only one (XOR)

1) All (AND) -> the trigger invokes the service when all of the specified documents are present in the IS (Integration Server).
2) Any (OR) -> when any one of the documents is present, the Integration Server invokes the service.
3) Only one (XOR) -> when any one of the specified documents is available, the Integration Server invokes the service and discards further instances for the join time-out period.

All (AND) The Integration Server invokes the trigger service when the server receives an instance of each specified publishable document type within the join time‐out period. The instance documents must have the same activation ID. This is the default join type.

Any (OR) The Integration Server invokes the trigger service when it receives an instance of any one of the specified publishable document types.

Only one (XOR) The Integration Server invokes the trigger service when it receives an instance of any of the specified document types. For the duration of the join time‐out period, the Integration Server discards any instances of the specified publishable document types with the same activation ID.

We have created an insert notification, is there any way to process the document in the subscriber without publishing it?

When you create an Insert Notification (or any other notification), it by default creates a publishable document associated with the notification; when a DB change happens, the corresponding notification publishes that activity to the Broker as a publishable document. This is auto-publish.

Just create a trigger with this document type and a subscribing service; you do not need to write a publish step in the subscribing service.

What is Developer?

All services and elements reside on the Integration Server. Developer is a GUI with which we can create, edit, delete, and test elements on the IS. We can even trace through the flow steps in Developer. Developer connects to the server through the HTTP port configured on the IS.

Can you explain the pub-sub architecture, and where have you implemented it?

1) Point-to-point architecture
2) Point-to-multipoint architecture

1) Point-to-point architecture
In this architecture the source system publishes the data and the target system receives it.

2) Point-to-multipoint architecture

In this, the source system publishes the data to the Broker, and multiple target systems subscribe to it from the Broker.

The publisher publishes a document to the Broker, then the subscriber receives the document from the Broker. You should make the document publishable on the publisher side.

The subscriber subscribes to that particular document by creating a trigger. The trigger monitors the document; if a subscribed document is published, the trigger invokes the service associated with it to handle the document.

This is the general architecture of the publish-subscribe model.


What are extended settings?

Extended settings are used to specify values for some of the internal configuration keys of the Integration Server.

For example, we can specify the Java compiler in these settings, and whenever we compile a Java service the IS uses this compiler to compile the Java code.

What is the difference between a systems locked element and a read-only element?

None; "system lock" is a term used in the webMethods platform to denote an element that has read-only files on the webMethods Integration Server.

Can you explain JDBC Adapter transaction types?

We have 3 types of JDBC Adapter transactions.

1) NO_TRANSACTION: The connection automatically commits operations.

2) LOCAL_TRANSACTION: The connection uses local transactions.
If we plan to use the connection with BatchInsertSQL or BatchUpdateSQL adapter services, we must specify the LOCAL_TRANSACTION type.
If we are configuring a basic notification and using the "exactly once" notification and "delete stored records" options, we must configure the notification to use the LOCAL_TRANSACTION type.

3) XA_TRANSACTION: The connection uses XA transactions. When connecting to Teradata we use XA transactions.

How do you know who has an element locked?

To find out who has locked a service, document, or other element:

Right-click that particular service or document and select Lock Properties from the menu displayed.

What Is Data Validation?

Data Validation is the process of verifying that run-time data conforms to a predefined structure and format. It also verifies that the run-time data is a specific datatype and falls within a defined range of values. 

What variable do we need to keep in the clearPipeline service?

Preserve.
The clearPipeline service clears the pipeline information.
If we do not specify "preserve" in the service, all pipeline information is lost.
If we list variables in "preserve", everything except the variables specified in preserve is lost. An example follows.
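For example (the variable names are illustrative):

INVOKE pub.flow:clearPipeline (preserve = {"orderId", "status"})

After this step, only orderId and status remain in the pipeline.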

What Is a Package?

A package is a container that is used to bundle services and related elements, such as specifications, IS document types, IS schemas, and output templates. When you create a folder, service, specification, IS document type, IS schema, or output template, you save it in a package. 

Steps of deploying a service in webMethods?

define
build
map
deploy

are the steps used to deploy.


The developer usually performs the steps up to Build; the administrator performs Map and Deploy.

What Is a Flow Step?

A flow service is made up of flow steps. A flow step is a basic unit of work that the webMethods Integration Server interprets and executes at run time.

When and why should we use transformers and flow services? How are they different from each other?

When we are performing only one data operation, we can directly invoke the corresponding service. If we want to perform multiple operations in a single step, it is better to use transformers in a MAP step; this improves readability, and only the mapped fields are passed to each transformer.

Mapping is the process of performing transformations to resolve data representation differences between services or document formats. By linking variables to each other on the Pipeline tab, you can accomplish name transformations and structural transformations. However, to perform value transformations you must execute some code or logic.

Developer provides two ways for you to invoke services: You can insert INVOKE steps or you can insert transformers onto the Pipeline tab. Transformers are the services you use to accomplish value transformations on the Pipeline tab. 

How to debug a flow service in webMethods?

Trace,
Trace to here,
Trace into,
Step, and
Step into

are the tools used to debug a service.

If we want to run the server on some other port number, what do we need to do?

1) Copy the directory where the Integration Server resides and rename it (if you want a separate instance).
2) Go to IS Admin.
3) Click the Ports tab under Security.
4) Go to Add Port and select webMethods/HTTP.
5) Go to Change Primary Port, select the new port, and update it.

Alternatively, start the webMethods batch/script with the port number you want webMethods to listen on for requests.

Start the Integration Server using the command: server.bat -port <port number>
Ex: <IntegrationServer_bin directory>\server.bat -port 6565
This command starts the Integration Server on the port specified at the command prompt.

What is the difference between using a service as a transformer and invoking it directly?

By invoking services as transformers, you can invoke multiple services in a single MAP step; the Integration Server treats the transformers as executing concurrently (their order is not guaranteed), which can improve performance.


When invoking a service as a transformer, we can map only the required fields.
If we invoke a service directly, the pipeline of the called service is merged into the current pipeline, and the unwanted variables have to be dropped explicitly.

Is the Integration Server a thread or a process?

A single process may contain multiple threads; all threads within a process share the same state and memory space and can communicate with each other directly, because they share the same variables. By this definition, the IS is a process.

What is the difference between groups and ACL groups?

Groups: can be created for people having the same responsibilities. Ex: a Developer group.

ACL: with the help of an ACL you can grant access to a group or an individual; with ACLs you can grant access to just a few members of a group.
Ex: a user need not belong to the Developer group to develop new flows, but he must belong to a group that is assigned to the Developer ACL.

What are client groups?

A client group contains a list of clients and configures which documents its clients can publish and subscribe to.

If I don't want to repeat, what do I need to do?

Set count = 0 in the REPEAT properties; the child steps execute once and are not repeated.

Scenario: I have a loop A, under which I have a child loop B, under which I have a BRANCH with a few services and a condition. If the condition is satisfied, I need my branch to exit from loop B. What should I do?

Place an EXIT step inside the branch condition and set its "Exit from" property to $loop (the nearest enclosing loop, i.e. loop B).

Also set the EXIT step's signal to SUCCESS. A sketch of the structure follows.
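A sketch of the structure (the list and condition names are illustrative):

LOOP over '/listA'                        (loop A)
  LOOP over '/listB'                      (loop B)
    BRANCH on '/condition'
      SEQUENCE [label: true]
        EXIT '$loop' (signal SUCCESS)     <- exits loop B only; loop A continues
      SEQUENCE [label: $default]
        ... other services ...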

What is the difference between DISABLING a Polling Notification and SUSPENDING a Polling Notification?

With version 6.5, suspended polling notifications were added. Suspending temporarily stops the activities of a polling notification without data loss.

Suspended: the database trigger and buffer table are not dropped.

Disabled: the database trigger and buffer table are dropped.

What is meant by "Block time out" and "Expire time out” in jdbc adapter configuration?

Block timeout: 

Refers to how much time the IS should wait to get connection from connection pool before throwing exception.

Expire Timeout:

Refers to how much time the free connection stay as it is before it expires. 

Explain try and catch block briefly?

To handle exceptions/errors we use a try and catch block built with SEQUENCE steps.

Remember the "exit on" property of a SEQUENCE, which can be set to SUCCESS or FAILURE (or DONE).

Exit on SUCCESS: the sequence exits as soon as one of its child steps succeeds.

Service1
Service2
Service3 (error here)
Service4
A sequence with Exit on SUCCESS runs only Service1 (it exits after the first step that succeeds).

Exit on FAILURE: the sequence exits at the first child step that fails.

Service1
Service2
Service3 (error here)
Service4
A sequence with Exit on FAILURE runs Service1 and Service2 successfully, then exits when Service3 fails; Service4 never runs.

Syntax:
Sequence (Main Block) [EXIT ON: SUCCESS]
Sequence (Try Block) [EXIT ON: FAILURE]

Sequence (Catch Block) [EXIT ON: DONE] 
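A fuller sketch of the pattern (the invoked service names are illustrative):

SEQUENCE (Main Block) [EXIT ON: SUCCESS]
  SEQUENCE (Try Block) [EXIT ON: FAILURE]
    INVOKE someRiskyService          <- steps that may fail
  SEQUENCE (Catch Block) [EXIT ON: DONE]
    INVOKE pub.flow:getLastError     <- lastError describes the failure
    ... log, handle, or re-throw ...

If the try block completes without error, the main sequence exits on its success and the catch block never runs; if any step in the try block fails, control passes to the catch block.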

What kind of error occurs if you do not set "Exit from" on the EXIT flow step?

Hi, if we do not mention "Exit from" in the EXIT flow step, it throws a FlowException.
By default, $loop is assumed, and the step will exit from the loop if the EXIT step is present inside a loop.