Wednesday, August 30, 2017

Install API Manager 2.0.0 features in DAS 3.1.0 Minimum HA Cluster

Introduction

The following steps describe how to install the API Manager Analytics features on a minimum High Availability (HA) Data Analytics Server (DAS) cluster. Here, Oracle 11g is used as the RDBMS for the databases.

Steps to cluster DAS

In this blog post, I will explain how the DAS server is clustered using the minimum HA deployment model.


1. Download Data Analytics Server 3.1.0 from here.

2. Create users for the following datasources in Oracle 11g (an example CREATE USER statement is shown after the list).

  • WSO2CarbonDB (user -> carbondb)
  • WSO2REG_DB (user -> regdb)
  • WSO2UM_DB (user -> userdb)
  • WSO2_ANALYTICS_EVENT_STORE_DB (user -> eventstoredb)
  • WSO2_ANALYTICS_PROCESSED_DATA_STORE_DB (user -> prosdatadb)
  • WSO2_METRICS_DB (user -> metricsdb)
  • WSO2ML_DB (user -> mldb)
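
As an example, the event store user could be created as follows. This is a minimal sketch, to be run as a DBA user (e.g., SYSTEM); the matching usernames and passwords are illustrative only:

-- Run as a DBA user; usernames/passwords are examples only
CREATE USER eventstoredb IDENTIFIED BY eventstoredb;
GRANT CONNECT, RESOURCE TO eventstoredb;
ALTER USER eventstoredb QUOTA UNLIMITED ON USERS;

Repeat the same statements for each of the remaining users (carbondb, regdb, userdb, prosdatadb, metricsdb, mldb).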

Note: Add the Oracle JDBC driver (e.g., ojdbc7.jar) to <DAS_HOME>/repository/components/lib on both nodes.
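
For example (assuming the driver jar has been downloaded to your current directory):

cp ojdbc7.jar <DAS_HOME>/repository/components/lib/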

3. Add the user management datasource in <DAS_HOME>/repository/conf/datasources/master-datasources.xml:

<datasource>
    <name>WSO2UM_DB</name>
    <description>The datasource used for user manager</description>
    <jndiConfig>
        <name>jdbc/WSO2UM_DB</name>
    </jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <url>jdbc:oracle:thin:@10.100.15.22:1521/oracle11g</url>
            <username>userdb</username>
            <password>userdb</password>
            <driverClassName>oracle.jdbc.driver.OracleDriver</driverClassName>
            <maxActive>50</maxActive>
            <maxWait>60000</maxWait>
            <testOnBorrow>true</testOnBorrow>
            <validationQuery>SELECT 1 FROM DUAL</validationQuery>
            <defaultAutoCommit>false</defaultAutoCommit>
            <validationInterval>30000</validationInterval>
        </configuration>
    </definition>
</datasource>

Note: oracle11g is the Oracle service name of the database in which the users were created.

4. Add the registry datasource in <DAS_HOME>/repository/conf/datasources/master-datasources.xml:

<datasource>
    <name>WSO2REG_DB</name>
    <description>The datasource used by the registry</description>
    <jndiConfig>
        <name>jdbc/WSO2REG_DB</name>
    </jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <url>jdbc:oracle:thin:@10.100.15.22:1521/oracle11g</url>
            <username>regdb</username>
            <password>regdb</password>
            <driverClassName>oracle.jdbc.driver.OracleDriver</driverClassName>
            <maxActive>50</maxActive>
            <maxWait>60000</maxWait>
            <testOnBorrow>true</testOnBorrow>
            <validationQuery>SELECT 1 FROM DUAL</validationQuery>
            <defaultAutoCommit>false</defaultAutoCommit>
            <validationInterval>30000</validationInterval>
        </configuration>
    </definition>
</datasource>

Note: oracle11g is the Oracle service name of the database in which the users were created.

Update the other datasources to point to the databases created above by changing the url, username, password, and driverClassName values accordingly.

5. Open the <DAS_HOME>/repository/conf/user-mgt.xml file and modify the dataSource property of the <configuration> element as follows:

<configuration>
    <Property name="dataSource">jdbc/WSO2UM_DB</Property>
</configuration>

6. Add a <dbConfig name="govregistry"> entry with its dataSource, together with the remote instance and mount configuration shown below, to the <DAS_HOME>/repository/conf/registry.xml file. Make sure to keep the existing 'wso2registry' dbConfig as it is.

<dbConfig name="govregistry">
    <dataSource>jdbc/WSO2REG_DB</dataSource>
</dbConfig>
<remoteInstance url="https://localhost:9443/registry">
    <id>gov</id>
    <cacheId>regdb@jdbc:oracle:thin:@10.100.15.22:1521/oracle11g</cacheId>
    <dbConfig>govregistry</dbConfig>
    <readOnly>false</readOnly>
    <enableCache>true</enableCache>
    <registryRoot>/</registryRoot>
</remoteInstance>
<mount path="/_system/governance" overwrite="true">
    <instanceId>gov</instanceId>
    <targetPath>/_system/governance</targetPath>
</mount>
<mount path="/_system/config" overwrite="true">
    <instanceId>gov</instanceId>
    <targetPath>/_system/config</targetPath>
</mount>

7. Set the following properties in the <DAS_HOME>/repository/conf/axis2/axis2.xml file to enable Hazelcast clustering.

a) Enable clustering by setting enable="true" on the clustering element with class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent", as shown below:

<clustering class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent" enable="true">

b) Enable the well-known address (WKA) scheme by changing the membershipScheme to 'wka':


<parameter name="membershipScheme">wka</parameter>

c) Set the respective server's IP address as the value of the localMemberHost property on each node:

<parameter name="localMemberHost">10.100.1.89</parameter>

d) Change the localMemberPort by assigning a unique port; the two nodes must use different ports:

<parameter name="localMemberPort">4000</parameter>

e) Add both DAS nodes as well-known addresses in the cluster by listing them under the <members> tag on each node, as shown below.

  <members>
    <member>
        <hostName>10.100.1.89</hostName>
        <port>4000</port>
    </member>
    <member>
        <hostName>10.100.1.90</hostName>
        <port>4100</port>
    </member>
</members>

Note: Make sure the two nodes use different localMemberPort values and that both members are listed under the <members> tag on each node.

8. Enable HA mode in <DAS_HOME>/repository/conf/event-processor.xml in order to cluster CEP:

<mode name="HA" enable="true">

9. Enter the respective server IP address for <hostName> in the <eventSync> and <management> sections under the HA mode configuration, as shown below:

<eventSync>
    <hostName>10.100.1.89</hostName>
    ...
</eventSync>

<management>
    <hostName>10.100.1.89</hostName>
    ...
</management>

10. Modify <DAS_HOME>/repository/conf/analytics/spark/spark-defaults.conf as follows,

a) Keep carbon.spark.master as 'local'. In a clustered setup, this creates a Spark cluster within the Hazelcast cluster.

b) Set carbon.spark.master.count to 2, since both nodes work as masters (one active and one passive):

carbon.spark.master local
carbon.spark.master.count 2

c) If the path to <DAS_HOME> is different on the two nodes, perform the following step; if it is the same, you can skip it.

11. Create identical symbolic links to <DAS_HOME> on both nodes, which ensures that a common path can be used. Then uncomment carbon.das.symbolic.link and set it to the symbolic link path:

carbon.das.symbolic.link /home/ubuntu/das/das_symlink/
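
The link itself can be created with ln -s on each node. A sketch, assuming <DAS_HOME> is /opt/das/wso2das-3.1.0 (an illustrative path):

ln -s /opt/das/wso2das-3.1.0 /home/ubuntu/das/das_symlink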

12. Make sure to apply the above changes on both nodes, adjusting IP addresses and ports (e.g., localMemberPort, the port offset in carbon.xml, etc.) accordingly.
Start at least one node with -Dsetup, since the tables for the created databases need to be populated; the other node can be started with or without -Dsetup. Go to <DAS_HOME>/bin and run:

sh wso2server.sh -Dsetup
 
Steps to install APIM Analytics features

1. Go to the management console and navigate to Main -> Configure -> Features.

2. Click Repository Management and go to Add Repository.

3. Give a name and either browse to a local repository or add a URL to add the repository.

Note - You can get the p2 repo from here



Name - Any preferred name (ex - p2 repo)
Location (from URL) - http://product-dist.wso2.com/p2/carbon/releases/wilkes


4. Go to the 'Available Features' tab, untick 'Group features by category', and click 'Find Features'.

5. The following features need to be installed from the listed set of features.



Tick the above features, click 'Install', and the features will be installed.

Note - Make sure to do the same for both nodes.

Steps to Configure the Statistics Datasource

Here, we only have to create the statistics database and point to it in the datasource file, since the other required steps were already completed during clustering.

1. Shut down both servers.

2. Create a user for the statistics database in Oracle (e.g., statdb).
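
As before, a minimal sketch to be run as a DBA user (the password is illustrative):

CREATE USER statdb IDENTIFIED BY statdb;
GRANT CONNECT, RESOURCE TO statdb;
ALTER USER statdb QUOTA UNLIMITED ON USERS;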

3. Go to <DAS_HOME>/repository/conf/datasources, open stats-datasources.xml, and change the properties as below:

<datasource>
    <name>WSO2AM_STATS_DB</name>
    <description>The datasource used for setting statistics to API Manager</description>
    <jndiConfig>
        <name>jdbc/WSO2AM_STATS_DB</name>
    </jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <url>jdbc:oracle:thin:@10.100.15.22:1521/oracle11g</url>
            <username>statdb</username>
            <password>statdb</password>
            <driverClassName>oracle.jdbc.driver.OracleDriver</driverClassName>
            <maxActive>50</maxActive>
            <maxWait>60000</maxWait>
            <testOnBorrow>true</testOnBorrow>
            <validationQuery>SELECT 1 FROM DUAL</validationQuery>
            <validationInterval>30000</validationInterval>
            <defaultAutoCommit>false</defaultAutoCommit>
        </configuration>
    </definition>
</datasource>


Sunday, July 3, 2016

Want to Monitor Outgoing or Incoming Messages? Let's Enable Wire Logs

Have you ever faced a situation where you had to monitor the outgoing or incoming messages of your WSO2 product in order to understand its request/response flow?

If the answer is YES, we can simply use wire logs to monitor incoming and outgoing messages with just a small configuration change. Here, I'm using the WSO2 API Manager product to describe how to enable wire logs.

1. First, go to <APIM_HOME>/repository/conf and open the log4j.properties file.
2. Here, you simply need to uncomment the following line:

log4j.logger.org.apache.synapse.transport.http.wire=DEBUG

3. That's it. Now save the file and start the API Manager server (go to <APIM_HOME>/bin and run sh wso2server.sh).
4. Assuming an API has already been deployed, go to the Store (https://<ip-address>:9443/store).
5. Go to the API console of an already available API and try out an available resource.
6. Then go to the terminal of the API Manager.

You can see that wire logs are enabled and the message transmissions are distinguished as follows:

  • Logs with '>>' are the incoming transmissions to API Manager (messages read from the wire)
  • Logs with '<<' are the outgoing transmissions from API Manager (messages written to the wire)

[2016-07-01 22:07:33,493] DEBUG - wire >> "OPTIONS /pizzashack/1.0.0/menu HTTP/1.1[\r][\n]"
[2016-07-01 22:07:33,493] DEBUG - wire >> "Host: 172.16.2.23:8243[\r][\n]"
[2016-07-01 22:07:33,493] DEBUG - wire >> "Connection: keep-alive[\r][\n]"
[2016-07-01 22:07:33,494] DEBUG - wire >> "Access-Control-Request-Method: GET[\r][\n]"
[2016-07-01 22:07:33,494] DEBUG - wire >> "Origin: https://172.16.2.23:9443[\r][\n]"
[2016-07-01 22:07:33,494] DEBUG - wire >> "User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/49.0.2623.87 Safari/537.36[\r][\n]"
[2016-07-01 22:07:33,494] DEBUG - wire >> "Access-Control-Request-Headers: accept, authorization[\r][\n]"
[2016-07-01 22:07:33,494] DEBUG - wire >> "Accept: */*[\r][\n]"
[2016-07-01 22:07:33,494] DEBUG - wire >> "Referer: https://172.16.2.23:9443/store/apis/info?name=PizzaShackAPI&version=1.0.0&provider=admin&tenant=carbon.super[\r][\n]"
[2016-07-01 22:07:33,494] DEBUG - wire >> "Accept-Encoding: gzip, deflate, sdch[\r][\n]"
[2016-07-01 22:07:33,494] DEBUG - wire >> "Accept-Language: en-US,en;q=0.8[\r][\n]"
[2016-07-01 22:07:33,494] DEBUG - wire >> "[\r][\n]"
[2016-07-01 22:07:33,565] DEBUG - wire << "HTTP/1.1 200 OK[\r][\n]"
[2016-07-01 22:07:33,566] DEBUG - wire << "Origin: https://172.16.2.23:9443[\r][\n]"
[2016-07-01 22:07:33,566] DEBUG - wire << "Accept: */*[\r][\n]"
[2016-07-01 22:07:33,566] DEBUG - wire << "Access-Control-Request-Method: GET[\r][\n]"
[2016-07-01 22:07:33,566] DEBUG - wire << "Access-Control-Allow-Origin: *[\r][\n]"
[2016-07-01 22:07:33,566] DEBUG - wire << "Access-Control-Allow-Methods: GET[\r][\n]"
[2016-07-01 22:07:33,566] DEBUG - wire << "Access-Control-Request-Headers: accept, authorization[\r][\n]"
[2016-07-01 22:07:33,566] DEBUG - wire << "Referer: https://172.16.2.23:9443/store/apis/info?name=PizzaShackAPI&version=1.0.0&provider=admin&tenant=carbon.super[\r][\n]"
[2016-07-01 22:07:33,566] DEBUG - wire << "Host: 172.16.2.23:8243[\r][\n]"
[2016-07-01 22:07:33,566] DEBUG - wire << "Accept-Encoding: gzip, deflate, sdch[\r][\n]"
[2016-07-01 22:07:33,566] DEBUG - wire << "Accept-Language: en-US,en;q=0.8[\r][\n]"
[2016-07-01 22:07:33,566] DEBUG - wire << "Access-Control-Allow-Headers: authorization,Access-Control-Allow-Origin,Content-Type,SOAPAction[\r][\n]"
[2016-07-01 22:07:33,566] DEBUG - wire << "Date: Sun, 03 Jul 2016 16:37:33 GMT[\r][\n]"
[2016-07-01 22:07:33,566] DEBUG - wire << "Transfer-Encoding: chunked[\r][\n]"
[2016-07-01 22:07:33,566] DEBUG - wire << "Connection: keep-alive[\r][\n]"
[2016-07-01 22:07:33,566] DEBUG - wire << "[\r][\n]"
[2016-07-01 22:07:33,566] DEBUG - wire << "0[\r][\n]"
[2016-07-01 22:07:33,566] DEBUG - wire << "[\r][\n]"
[2016-07-01 22:07:33,570] DEBUG - wire >> "GET /pizzashack/1.0.0/menu HTTP/1.1[\r][\n]"
[2016-07-01 22:07:33,570] DEBUG - wire >> "Host: 172.16.2.23:8243[\r][\n]"
[2016-07-01 22:07:33,570] DEBUG - wire >> "Connection: keep-alive[\r][\n]"
[2016-07-01 22:07:33,570] DEBUG - wire >> "Accept: application/json[\r][\n]"
[2016-07-01 22:07:33,571] DEBUG - wire >> "Origin: https://172.16.2.23:9443[\r][\n]"
[2016-07-01 22:07:33,571] DEBUG - wire >> "User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/49.0.2623.87 Safari/537.36[\r][\n]"
[2016-07-01 22:07:33,571] DEBUG - wire >> "Authorization: Bearer 3de15404-ffdb-3427-98da-65e6e43aaeaa[\r][\n]"
[2016-07-01 22:07:33,571] DEBUG - wire >> "Referer: https://172.16.2.23:9443/store/apis/info?name=PizzaShackAPI&version=1.0.0&provider=admin&tenant=carbon.super[\r][\n]"
[2016-07-01 22:07:33,571] DEBUG - wire >> "Accept-Encoding: gzip, deflate, sdch[\r][\n]"
[2016-07-01 22:07:33,571] DEBUG - wire >> "Accept-Language: en-US,en;q=0.8[\r][\n]"
[2016-07-01 22:07:33,571] DEBUG - wire >> "[\r][\n]"

Friday, June 3, 2016

Setup WSO2 API Manager Analytics with WSO2 API Manager 2.0 using RDBMS

In this blog post I'll explain how to configure an RDBMS to publish APIM analytics using APIM Analytics 2.0.0.

The purpose of the RDBMS is to store the summarized data produced by the analysis process; API Manager then fetches this data and displays it in dashboards on the APIM side.

As of APIM 2.0.0, RDBMS is the recommended way to publish statistics for API Manager. Hence, in this blog post I will explain the step-by-step RDBMS configuration needed to view statistics in the Publisher and Store.

Steps to configure,

1. First download the WSO2 API Manager Analytics 2.0.0 release pack and unzip it.

2. Go to carbon.xml ([APIM_ANALYTICS_HOME]/repository/conf/carbon.xml) and set the port offset to 1 (the default offset for APIM Analytics):

<Ports>
<!-- Ports offset. This entry will set the value of the ports defined below to
the define value + Offset.
e.g. Offset=2 and HTTPS port=9443 will set the effective HTTPS port to 9445
-->
<Offset>1</Offset>

Note: This is only necessary if the API Manager 2.0.0 and APIM Analytics servers both run on the same machine.

3. Now add the datasource for the statistics DB in stats-datasources.xml ([APIM_ANALYTICS_HOME]/repository/conf/datasources/stats-datasources.xml) according to the preferred RDBMS. You can use any RDBMS such as H2, MySQL, Oracle, or PostgreSQL; I use MySQL in this blog post.


<datasource>
    <name>WSO2AM_STATS_DB</name>
    <description>The datasource used for setting statistics to API Manager</description>
    <jndiConfig>
        <name>jdbc/WSO2AM_STATS_DB</name>
    </jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <url>jdbc:mysql://localhost:3306/statdb?autoReconnect=true&amp;relaxAutoCommit=true</url>
            <username>maneesha</username>
            <password>password</password>
            <driverClassName>com.mysql.jdbc.Driver</driverClassName>
            <maxActive>50</maxActive>
            <maxWait>60000</maxWait>
            <testOnBorrow>true</testOnBorrow>
            <validationQuery>SELECT 1</validationQuery>
            <validationInterval>30000</validationInterval>
        </configuration>
    </definition>
</datasource>

Give the correct hostname and database name in <url> (in this case, localhost and statdb respectively), the username and password for the database, and the driver class name.

4. The WSO2 Analytics server automatically creates the table structure for the statistics database at server startup when started with '-Dsetup'.
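
For example, from [APIM_ANALYTICS_HOME]/bin:

sh wso2server.sh -Dsetup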

5. Copy the related database driver into <APIM_ANALYTICS_HOME>/repository/components/lib directory.

If you use mysql - Download
If you use oracle 12c - Download
If you use Mssql - Download

6. Start the Analytics server

7. Download the WSO2 API Manager 2.0.0 pack and unzip it ( Download )

8. Open api-manager.xml ([APIM_HOME]/repository/conf/api-manager.xml) and enable Analytics. The configuration should look like this (by default, the value is set to false):

<Analytics>
        <!-- Enable Analytics for API Manager -->
        <Enabled>true</Enabled>

9. Then configure the server URL of the analytics server used to collect statistics; the defined format is 'protocol://hostname:port/'. In addition, the admin credentials used to log in to the remote DAS server have to be configured, as shown below:

<DASServerURL>{tcp://localhost:7612}</DASServerURL>
<DASUsername>admin</DASUsername>
<DASPassword>admin</DASPassword>

Assuming the Analytics server is on the same machine as API Manager 2.0, I used 'localhost' as the hostname. Change it to the remote hostname if the Analytics server runs on a different instance.

By default, the server port is adjusted with offset '1'. If the Analytics server has a different port offset ( check {APIM_ANALYTICS_HOME}/repository/conf/carbon.xml for the offset ), change the port in <DASServerURL> accordingly. As an example if the Analytics server has the port offset of 3, <DASServerURL> should be {tcp://localhost:7614}.

10. For your information, API Manager 2.0 enables the RDBMS configuration for statistics by default. To enable publishing using RDBMS, <StatsProviderImpl> should be uncommented (by default, it is not commented out, so this step can be omitted):

<!-- For APIM implemented Statistic client for DAS REST API -->
        <!--StatsProviderImpl>org.wso2.carbon.apimgt.usage.client.impl.APIUsageStatisticsRestClientImpl</StatsProviderImpl-->
        <!-- For APIM implemented Statistic client for RDBMS -->
        <StatsProviderImpl>org.wso2.carbon.apimgt.usage.client.impl.APIUsageStatisticsRdbmsClientImpl</StatsProviderImpl>

11. The next step is to configure the statistics database on the API Manager side. Add the same statistics datasource that was configured on the Analytics side by opening master-datasources.xml ([APIM_HOME]/repository/conf/datasources/master-datasources.xml):


<datasource>
    <name>WSO2AM_STATS_DB</name>
    <description>The datasource used for setting statistics to API Manager</description>
    <jndiConfig>
        <name>jdbc/WSO2AM_STATS_DB</name>
    </jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <url>jdbc:mysql://localhost:3306/statdb?autoReconnect=true&amp;relaxAutoCommit=true</url>
            <username>maneesha</username>
            <password>password</password>
            <driverClassName>com.mysql.jdbc.Driver</driverClassName>
            <maxActive>50</maxActive>
            <maxWait>60000</maxWait>
            <testOnBorrow>true</testOnBorrow>
            <validationQuery>SELECT 1</validationQuery>
            <validationInterval>30000</validationInterval>
        </configuration>
    </definition>
</datasource>

12. Copy the related database driver into <APIM_HOME>/repository/components/lib directory as well.

13. Start the API Manager server.

Go to Statistics in the Publisher, and the screen should look like this, with the message 'Data Publishing Enabled. Generate some traffic to see statistics.'


To view statistics, you have to create at least one API and invoke it in order to generate some traffic to display in the graphs.
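
For instance, a published API can be invoked through the gateway with curl. A sketch, assuming the PizzaShack sample API and a valid access token (both illustrative):

curl -k -H "Authorization: Bearer <access-token>" https://localhost:8243/pizzashack/1.0.0/menu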


Sunday, May 8, 2016

Setup WSO2 API Manager Analytics with WSO2 API Manager 2.0 using REST Client


Please Note - Statistics publishing using the REST client was deprecated as of APIM 2.0.0. Please refer to this to continue.

In this blog post I will explain how to configure WSO2 API Manager Analytics 2.0.0 with WSO2 API Manager 2.0 to publish and view statistics. Before going further into the topic, here is a brief summary of the role of WSO2 API Manager Analytics 2.0.0.

WSO2 API Manager comes with the ability to view statistics of the operations carried out, such as usage comparisons, monitoring of throttled-out requests, API last access times, and so on. To do this, the user has to configure an analytics server with API Manager, which then allows statistics to be viewed based on the given criteria. Until WSO2 API Manager 2.0.0, the recommended analytics server was WSO2 DAS (Data Analytics Server), a high-performing enterprise data analytics platform; before that, WSO2 BAM (Business Activity Monitor) was used to collect and analyze runtime statistics from the API Manager. Based on WSO2 DAS, and with the vision of having a separate, custom analytics package that includes new features and performs all the analytics for API Manager, WSO2 API Manager Analytics was introduced. WSO2 API Manager Analytics fuses batch and real-time analytics with predictive analytics via machine learning, and generates alerts when an abnormal situation occurs.

Hopefully you now have a sound understanding of what API Manager Analytics is all about. So let's start with the configuration.


Steps to configure,

1. First download the WSO2 API Manager Analytics 2.0.0 release pack and unzip it.
( Download )

2. Start the Analytics server (by default, the port offset is set to 1 in carbon.xml).

3. Go to the Management Console of the Analytics server and log in as administrator (Username: admin, Password: admin).

4. Go to Manage -> Carbon Applications -> List and delete the existing org.wso2.carbon.analytics.apim Carbon app.

5. Browse to the REST client CAR app (org_wso2_carbon_analytics_apim_REST-1.0.0.car) in [APIM_ANALYTICS_HOME]/statistics and upload it.

That's it from the APIM Analytics side. Now let's see how to configure API Manager to finalize the configuration.

6. Download the WSO2 API Manager 2.0.0 pack and unzip it ( Download )

7. Open api-manager.xml ([APIM_HOME]/repository/conf/api-manager.xml) and enable Analytics. The configuration should look like this (by default, the value is set to false):

<Analytics> 
        <!-- Enable Analytics for API Manager --> 
        <Enabled>true</Enabled> 


8. Then configure the server URL of the analytics server used to collect statistics; the defined format is 'protocol://hostname:port/'. In addition, the admin credentials used to log in to the remote DAS server have to be configured, as shown below:

<DASServerURL>{tcp://localhost:7612}</DASServerURL> 
<DASUsername>admin</DASUsername> 
<DASPassword>admin</DASPassword>


Assuming the Analytics server is on the same machine as API Manager 2.0, I used 'localhost' as the hostname; change it to the remote hostname if the Analytics server runs on a different instance. By default, the server port is adjusted with an offset of '1'. If the Analytics server has a different port offset (check [APIM_ANALYTICS_HOME]/repository/conf/carbon.xml for the offset), change the port in <DASServerURL> accordingly. For example, if the Analytics server has a port offset of 3, <DASServerURL> should be {tcp://localhost:7614}.


Now we have to choose between 2 clients to fetch and publish statistics.

  • The RDBMS client, which fetches data from the RDBMS and publishes it.
  • The REST client, which fetches data directly from the Analytics server.

I chose the REST client to publish data in this tutorial, and I will explain how to configure data fetching using RDBMS in the next blog post.

For your information, API Manager 2.0 enables the RDBMS configuration for statistics by default.

9. To enable publishing using the REST client, the REST <StatsProviderImpl> should be uncommented (by default, it is commented out) and the RDBMS <StatsProviderImpl> should be commented out:

<!-- For APIM implemented Statistic client for DAS REST API -->
        <StatsProviderImpl>org.wso2.carbon.apimgt.usage.client.impl.APIUsageStatisticsRestClientImpl</StatsProviderImpl>
        <!-- For APIM implemented Statistic client for RDBMS -->
        <!--StatsProviderImpl>org.wso2.carbon.apimgt.usage.client.impl.APIUsageStatisticsRdbmsClientImpl</StatsProviderImpl-->


10. Then the REST API URL should be configured with the hostname and port, along with the credentials to access it:

<DASRestApiURL>https://localhost:9444</DASRestApiURL> 
<DASRestApiUsername>admin</DASRestApiUsername> 
<DASRestApiPassword>admin</DASRestApiPassword>

As mentioned before, the port reflects the default offset of 1 for WSO2 APIM Analytics (9444 = 9443 + 1).

11. Now Save api-manager.xml and start the API Manager 2.0 server.

That's it. Open the Publisher in a browser (https://<ip-address>:<port>/publisher), go to Statistics, and select API Usage as an example. The screen should look like this, with the message 'Data Publishing Enabled. Generate some traffic to see statistics.'




Just create a few APIs and invoke them in order to generate some traffic for statistics to appear on the graphs. Then you can see the statistics like this.







Thursday, April 28, 2016

Real time use cases of Deployment Synchronizer with a WSO2 ESB Cluster – Part 1

In this blog post I will explain some of the real-time use cases of the Deployment Synchronizer (DepSync) in a WSO2 ESB cluster environment. If you want to know more about DepSync, follow my previous post.

Before going further into the topic, I'm going to list the use cases so anyone can easily understand the scenarios covered here.

1. Create an artifact using Management console
2. Remove an artifact using Management console
3. Create and save an artifact in local repository of the manager node
4. Remove an artifact in local repository of the manager node
5. Create and save an artifact in local repository of a worker node
6. Remove an artifact in local repository of a worker node


Usecase 1 - Create an artifact using Management console

Here, I'm going to create a sample address endpoint and check how the synchronization process happens on the manager and worker nodes.

1. Go to management console of the manager node, select Endpoints under Service Bus.

2. Then select 'Add Endpoint', choosing 'Address Endpoint' as the type.

3. Give a name and an address to the endpoint and press 'Save and Close'.





Now observe the terminals of both manager and worker nodes.


Manager Node





Worker node





Once an artifact has been deployed to the central repository (we use an SVN repository), the manager node sends a Hazelcast message to all worker nodes indicating that the auto-commit of a change has been done. The worker nodes receive the same Hazelcast message (check the messageId), and the process of checking out (autocheckout) the changes from the central repository to their local repositories takes place.


Usecase 2 - Remove an artifact using Management console


The same process as in Usecase 1 happens here. Once an artifact has been removed, the manager node commits the removal to the central repository and, in parallel, sends a message to the worker nodes to check out the changes. Note the output of the worker node below, which shows the undeployment of the removed artifact from its local repository.


Worker Node





Usecase 3 - Create and save an artifact in local repository of the manager node


In this use case, I'm going to save a proxy service manually in the local repository (<PRODUCT_HOME>/repository/deployment/server/synapse-configs/default) of the manager node.


Following is a sample configuration of a proxy service that includes a send mediator:


<proxy xmlns="http://ws.apache.org/ns/synapse"
       name="SamplePassThrough"
       startOnLoad="true"
       trace="disable"
       transports="http https">
    <target>
        <endpoint>
            <address uri="http://10.100.7.11:9768/java_first_jaxws/services/hello_world"/>
        </endpoint>
        <outSequence>
            <send/>
        </outSequence>
    </target>
</proxy>


Go to the proxy-services folder of the local repository, create an XML file with the name 'SamplePassThrough.xml', and save the above configuration in it.

Now we have successfully created a sample proxy service. Let's check the terminals of both the manager and worker nodes to understand what happens here.


Manager Node





Worker Node




As soon as the file is created in the local repository of the manager node, the manager auto-commits the newly configured artifact to the central SVN repository under the name of the proxy service. The message communication happens after that: each worker node receives the message, checks out the change from the SVN repository, and deploys the proxy service in its local repository.


I will explain the other three use cases in my next post. If you have questions, don't hesitate to drop a comment.

Sunday, April 24, 2016

Use SVN Deployment Synchronizer to sync artifacts in multiple nodes of a WSO2 cluster

The Deployment Synchronizer has the ability to sync deployed artifacts across a multi-node cluster as soon as a deployment happens. It's necessary to have the same configuration on each node of the cluster, since the whole point of the mechanism is to work (virtually) as a single system. If a particular node doesn't have the same configuration as the rest of the nodes, it breaks the clustering model. For example, if a proxy service is not included on Node 1 but is invoked through the load balancer, the cluster won't fulfill the work division among nodes as intended.

WSO2 products use an SVN (Subversion) based deployment synchronizer to sync deployed artifacts such as endpoints, proxy services, and sequences. The Subversion repository is used to sync the contents of the local sync directory (the Axis2 repository directory, <PRODUCT_HOME>/repository/deployment/server, by default).

Now let's see how to set up the SVN DepSync repository with WSO2 products. Before going further into the setup, it's necessary to select a suitable SVN version. It's advised to use only SVN version 1.7 or 1.8 for products based on Carbon 4.4.x (if you're using products based on Carbon 4.2.x or below, please use only SVN version 1.6).

The following file needs to be downloaded and saved in the product environment to continue with the SVN DepSync configuration.



  • Download SVNKit from here and save it into the <PRODUCT_HOME>/repository/components/dropins folder (see the example below).
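
For example (the jar file name below is an assumption and depends on the SVNKit bundle you downloaded):

cp svnkit-all-1.8.7.wso2v1.jar <PRODUCT_HOME>/repository/components/dropins/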

After completing the prerequisites, let's move on to configuring DepSync on the manager and worker nodes.


The Deployment Synchronizer is configured in the <PRODUCT_HOME>/repository/conf/carbon.xml file; go to the <DeploymentSynchronizer> section in carbon.xml to make the necessary changes.


Steps to Enable DepSync on the manager node

  • To enable the Deployment Synchronizer, the value of the <Enabled> tag should be set to 'true'.
  • Since the manager node is responsible for receiving server requests and committing changes from its local repository to the central repository, the value of <AutoCommit> should be 'true'.
  • The <AutoCheckout> value should be changed to 'true', so that once changes are committed to the central repository, they are automatically checked out to the local repository of the node.
  • We're using Subversion as the repository type here: <RepositoryType>svn</RepositoryType>
  • The location of the SVN repository should be specified: <SvnUrl>https://svn.example.org/depSync</SvnUrl>
  • <SvnUser> and <SvnPassword> must be specified with the username and password of the SVN repository, respectively. If a tenant-specific configuration has to be made, set the value of <SvnUrlAppendTenantId> to 'true'.

That's it for the manager node configuration. The manager node's deployment synchronization configuration should look like this:

<DeploymentSynchronizer>
    <Enabled>true</Enabled>
    <AutoCommit>true</AutoCommit>
    <AutoCheckout>true</AutoCheckout>
    <RepositoryType>svn</RepositoryType>
    <SvnUrl>https://svn.example.com/depsync.repo/</SvnUrl>
    <SvnUser>rep1</SvnUser>
    <SvnPassword>reppass</SvnPassword>
    <SvnUrlAppendTenantId>true</SvnUrlAppendTenantId>
</DeploymentSynchronizer>

Now move to worker node configuration.


Steps to Enable DepSync on the worker nodes


The worker nodes have the same DepSync configuration as the manager node, except for <AutoCommit>. Since a worker node doesn't receive any server requests to handle, and committing isn't associated with worker nodes, the <AutoCommit> value is set to 'false'. This is the DepSync configuration for worker nodes:

<DeploymentSynchronizer>
    <Enabled>true</Enabled>
    <AutoCommit>false</AutoCommit>
    <AutoCheckout>true</AutoCheckout>
    <RepositoryType>svn</RepositoryType>
    <SvnUrl>https://svn.example.com/depsync.repo/</SvnUrl>
    <SvnUser>rep1</SvnUser>
    <SvnPassword>reppass</SvnPassword>
    <SvnUrlAppendTenantId>true</SvnUrlAppendTenantId>
</DeploymentSynchronizer>


So that's it on how to configure the Deployment Synchronizer in a clustered environment. I will discuss some of the use cases of DepSync in my upcoming posts.

Saturday, April 23, 2016

How to create a custom log for a proxy service in WSO2 ESB

Log files are vital for identifying errors, security threats, and the sequence of service execution when using WSO2 Carbon products. Usually, application and service logs are written into the server logs, but if you want a separate log for a particular service, you simply need to add a few lines to the log4j.properties file. With that in mind, I will explain how to create a customized log file for a particular proxy service using WSO2 Enterprise Service Bus.

First, open the log4j.properties file (<ESB_HOME>/repository/conf/log4j.properties) and simply add the following entry.

log4j.category.SERVICE_LOGGER.TestLogProxy=INFO, PROXY_APPENDER
log4j.additivity.SERVICE_LOGGER.TestLogProxy=false
log4j.appender.PROXY_APPENDER=org.apache.log4j.DailyRollingFileAppender
log4j.appender.PROXY_APPENDER.File=${carbon.home}/repository/logs/${instance.log}/testlogproxy${instance.log}.log
log4j.appender.PROXY_APPENDER.Append=true
log4j.appender.PROXY_APPENDER.layout=org.apache.log4j.PatternLayout
log4j.appender.PROXY_APPENDER.layout.ConversionPattern=%d{HH:mm:ss,SSS} [%X{ip}-%X{host}] [%t] %5p %c{1} %m%n


Here, 'TestLogProxy' is the name of the proxy service. You can define a custom location as the value of the 'log4j.appender.PROXY_APPENDER.File' property; I have used the default log path (${carbon.home}/repository/logs/) here.

Then create a proxy service with log mediator ( log category – Info and log level – Full ) and add a send mediator with a service endpoint. You can see the customized log file under specified log path and logs for the 'TestLogProxy' will be logged in this file.