This post walks through the steps to install the API Manager Analytics features on a minimum High Availability (HA) Data Analytics Server (DAS) deployment. Oracle 11g is used as the RDBMS for the databases.
Steps to Cluster DAS
In this section, I will explain how to cluster the DAS server with the minimum HA deployment model.
1. Download Data Analytics Server 3.1.0 from here.
2. Create users for the following datasources in Oracle 11g (a SQL sketch follows the note below).
- WSO2CarbonDB (user -> carbondb)
- WSO2REG_DB (user -> regdb)
- WSO2UM_DB (user -> userdb)
- WSO2_ANALYTICS_EVENT_STORE_DB (user -> eventstoredb)
- WSO2_ANALYTICS_PROCESSED_DATA_STORE_DB (user -> prosdatadb)
- WSO2_METRICS_DB (user -> metricsdb)
- WSO2ML_DB (user -> mldb)
Note - Add the database driver (e.g., ojdbc7.jar) to <DAS_HOME>/repository/components/lib on both nodes.
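For reference, creating each of these users in Oracle looks roughly like the following sketch (the CONNECT and RESOURCE grants are an assumption; adjust privileges and passwords to your environment). Repeat it for carbondb, regdb, userdb, eventstoredb, prosdatadb, metricsdb and mldb:
-- Hypothetical example for the user management schema (user: userdb)
CREATE USER userdb IDENTIFIED BY userdb;
-- Assumed privileges: enough to connect and create tables
GRANT CONNECT, RESOURCE TO userdb;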
3. Add User management datasource in <DAS_HOME>/repository/conf/datasources/master-datasources.xml
<datasource>
<name>WSO2UM_DB</name>
<description>The datasource used for user manager</description>
<jndiConfig>
<name>jdbc/WSO2UM_DB</name>
</jndiConfig>
<definition type="RDBMS">
<configuration>
<url>jdbc:oracle:thin:@10.100.15.22:1521/oracle11g</url>
<username>userdb</username>
<password>userdb</password>
<driverClassName>oracle.jdbc.driver.OracleDriver</driverClassName>
<maxActive>50</maxActive>
<maxWait>60000</maxWait>
<testOnBorrow>true</testOnBorrow>
<validationQuery>SELECT 1 FROM DUAL</validationQuery>
<defaultAutoCommit>false</defaultAutoCommit>
<validationInterval>30000</validationInterval>
</configuration>
</definition>
</datasource>
Note - oracle11g is the Oracle service name of the database where the users were created.
4. Add registry datasource in <DAS_HOME>/repository/conf/datasources/master-datasources.xml
<datasource>
<name>WSO2REG_DB</name>
<description>The datasource used by the registry</description>
<jndiConfig>
<name>jdbc/WSO2REG_DB</name>
</jndiConfig>
<definition type="RDBMS">
<configuration>
<url>jdbc:oracle:thin:@10.100.15.22:1521/oracle11g</url>
<username>regdb</username>
<password>regdb</password>
<driverClassName>oracle.jdbc.driver.OracleDriver</driverClassName>
<maxActive>50</maxActive>
<maxWait>60000</maxWait>
<testOnBorrow>true</testOnBorrow>
<validationQuery>SELECT 1 FROM DUAL</validationQuery>
<defaultAutoCommit>false</defaultAutoCommit>
<validationInterval>30000</validationInterval>
</configuration>
</definition>
</datasource>
Note - oracle11g is the Oracle service name of the database where the users were created.
Update the remaining datasources (WSO2CarbonDB, WSO2_ANALYTICS_EVENT_STORE_DB, WSO2_ANALYTICS_PROCESSED_DATA_STORE_DB, WSO2_METRICS_DB, WSO2ML_DB) in the same way, changing the url, username, password and driverClassName to match the users created in step 2; a sketch follows.
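For instance, pointing the Carbon datasource at the carbondb user might look like the following sketch (the WSO2_CARBON_DB datasource name and a password equal to the username are assumptions based on the pattern above; note also that the event store, processed data store and metrics datasources live in analytics-datasources.xml and metrics-datasources.xml under the same directory and have their own structure):
<datasource>
<name>WSO2_CARBON_DB</name>
<description>The datasource used for registry and user manager</description>
<jndiConfig>
<name>jdbc/WSO2CarbonDB</name>
</jndiConfig>
<definition type="RDBMS">
<configuration>
<url>jdbc:oracle:thin:@10.100.15.22:1521/oracle11g</url>
<username>carbondb</username>
<password>carbondb</password>
<driverClassName>oracle.jdbc.driver.OracleDriver</driverClassName>
<maxActive>50</maxActive>
<maxWait>60000</maxWait>
<testOnBorrow>true</testOnBorrow>
<validationQuery>SELECT 1 FROM DUAL</validationQuery>
<defaultAutoCommit>false</defaultAutoCommit>
<validationInterval>30000</validationInterval>
</configuration>
</definition>
</datasource>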
5. Open the <DAS_HOME>/repository/conf/user-mgt.xml file and modify the dataSource property of the <configuration> element as follows
<configuration>
…
<Property name="dataSource">jdbc/WSO2UM_DB</Property>
</configuration>
6. Add a <dbConfig name="govregistry"> with the dataSource, together with the remote instance and mount configuration, to the <DAS_HOME>/repository/conf/registry.xml file. Make sure to keep the ‘wso2registry’ dbConfig as it is.
<dbConfig name="govregistry"> <dataSource>jdbc/WSO2REG_DB</dataSource> </dbConfig> <remoteInstance url="https://localhost:9443/registry"> <id>gov</id> <cacheId>regdb@jdbc:oracle:thin:@10.100.15.22:1521/oracle11g</cacheId> <dbConfig>govregistry</dbConfig> <readOnly>false</readOnly> <enableCache>true</enableCache> <registryRoot>/</registryRoot> </remoteInstance> <mount path="/_system/governance" overwrite="true"> <instanceId>gov</instanceId> <targetPath>/_system/governance</targetPath> </mount> <mount path="/_system/config" overwrite="true"> <instanceId>gov</instanceId> <targetPath>/_system/config</targetPath> </mount>
7. Set the following properties in the <DAS_HOME>/repository/conf/axis2/axis2.xml file to enable Hazelcast clustering.
a) Enable clustering by setting enable="true" on the clustering class "org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent", as below:
<clustering class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent" enable="true">
b) Enable well-known address (WKA) membership by changing the membershipScheme to ‘wka’:
<parameter name="membershipScheme">wka</parameter>
c) Set the respective server’s IP address as the value of the localMemberHost property on each node:
<parameter name="localMemberHost">10.100.1.89</parameter>
d) Assign a unique port as the localMemberPort; the two nodes must use different ports:
<parameter name="localMemberPort">4000</parameter>
e) Add both DAS nodes as well-known members of the cluster by listing them under the <members> tag on each node, as shown below.
<members>
<member>
<hostName>10.100.1.89</hostName>
<port>4000</port>
</member>
<member>
<hostName>10.100.1.90</hostName>
<port>4100</port>
</member>
</members>
Note - Make sure the two nodes use different ports and that both are listed under the <members> tag on each node; the sketch below shows the per-node values.
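To make the per-node differences concrete, the localMember values on each node would look roughly like this (assuming node 1 is 10.100.1.89 and node 2 is 10.100.1.90, as in the members list above):
<!-- Node 1 (axis2.xml) -->
<parameter name="localMemberHost">10.100.1.89</parameter>
<parameter name="localMemberPort">4000</parameter>
<!-- Node 2 (axis2.xml) -->
<parameter name="localMemberHost">10.100.1.90</parameter>
<parameter name="localMemberPort">4100</parameter>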
8. Enable HA mode in <DAS_HOME>/repository/conf/event-processor.xml to cluster the CEP component.
<mode name="HA" enable="true">
9. Enter the respective server IP address for <hostName> in the <eventSync> and <management> sections under the HA mode configuration, as below (each node uses its own IP address):
<eventSync>
<hostName>10.100.1.89</hostName>
..
</eventSync>
<management>
<hostName>10.100.1.89</hostName>
..
</management>
10. Modify <DAS_HOME>/repository/conf/analytics/spark/spark-defaults.conf as follows,
a) Keep carbon.spark.master as ‘local’. This creates a Spark cluster on top of the Hazelcast cluster.
b) Set ‘carbon.spark.master.count’ to 2, since both nodes work as masters (one active, one passive):
carbon.spark.master local
carbon.spark.master.count 2
c) If the path to <DAS_HOME> is different on the two nodes, do the following; if it is the same, you can skip this step.
11. Create identical symbolic links to <DAS_HOME> on both nodes to ensure that a common path can be used. Uncomment carbon.das.symbolic.link and set it to the symbolic link path, as sketched after the property below.
carbon.das.symbolic.link /home/ubuntu/das/das_symlink/
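Creating the link itself might look like this on each node (a sketch; the installation path /home/ubuntu/das/wso2das-3.1.0 is an assumption, so substitute your own):
# Link the local DAS installation to the common path used in spark-defaults.conf
ln -s /home/ubuntu/das/wso2das-3.1.0 /home/ubuntu/das/das_symlink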
12. Make sure to apply the above changes on both nodes, adjusting IP addresses and ports (e.g., localMemberPort, the port offset in carbon.xml, etc.) accordingly; a carbon.xml sketch follows.
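For example, separating the nodes’ ports via the offset under <Ports> in <DAS_HOME>/repository/conf/carbon.xml might look like this (the offset value of 1 is an assumption; node 1 can keep the default 0):
<!-- carbon.xml on node 2 -->
<Offset>1</Offset>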
Start at least one node with -Dsetup, since the tables in the created databases need to be populated; the other node can be started with or without -Dsetup (see below). Go to <DAS_HOME>/bin and run:
sh wso2server.sh -Dsetup
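Once the tables have been populated, the other node (and subsequent restarts) can start without the flag:
sh wso2server.sh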
Steps to install APIM Analytics features
1. In the management console, go to Main -> Configure -> Features.
2. Click Repository Management and go to Add Repository.
3. Give the repository a name and either browse to it or add its URL.
Note - You can get the p2 repo from here
Name - Any preferred name (e.g., p2 repo)
Location (from URL) - http://product-dist.wso2.com/p2/carbon/releases/wilkes
4. Go to the ‘Available Features’ tab, untick ‘Group features by category’, and click ‘Find Features’.
5. The required APIM Analytics features need to be installed from the listed set of features.
Tick those features and click Install; the features will then be installed.
Note - Make sure to do the same on both nodes.
Steps to Configure the Statistics Datasource
Here, we only have to create the statistics database and point the datasource file at it, since the other required steps were already covered during clustering.
1. Shut down both servers.
2. Create a user for the statistics database in Oracle (e.g., user: statdb), as in step 2 of the clustering section.
3. Open stats-datasources.xml in <DAS_HOME>/repository/conf/datasources and change the properties as below, using the Oracle URL and driver class to match the other datasources:
<datasource>
<name>WSO2AM_STATS_DB</name>
<description>The datasource used for setting statistics to API Manager</description>
<jndiConfig>
<name>jdbc/WSO2AM_STATS_DB</name>
</jndiConfig>
<definition type="RDBMS">
<configuration>
<url>jdbc:oracle:thin:@10.100.15.22:1521/oracle11g</url>
<username>statdb</username>
<password>statdb</password>
<driverClassName>oracle.jdbc.driver.OracleDriver</driverClassName>
<maxActive>50</maxActive>
<maxWait>60000</maxWait>
<testOnBorrow>true</testOnBorrow>
<validationQuery>SELECT 1 FROM DUAL</validationQuery>
<validationInterval>30000</validationInterval>
<defaultAutoCommit>false</defaultAutoCommit>
</configuration>
</definition>
</datasource>
Related documentation
- https://docs.wso2.com/display/CLUSTER44x/Minimum+High+Availability+Deployment+-+DAS+3.1.0
- https://docs.wso2.com/display/DAS310/Setting+up+Oracle
- https://docs.wso2.com/display/AM200/Installing+WSO2+APIM+Analytics+Features
- https://docs.wso2.com/display/AM200/Configuring+APIM+Analytics (Go to ‘Standard Setup’ section)