If APIM and DAS run on the same machine, increase the default service ports of DAS by setting an offset value in <DAS_HOME>/repository/conf/carbon.xml:
<Offset>1</Offset>
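For context, the offset is defined under the <Ports> element of carbon.xml; every default Carbon port is shifted by the offset value, so with an offset of 1 the HTTPS management port 9443 becomes 9444:

```xml
<!-- <DAS_HOME>/repository/conf/carbon.xml -->
<Ports>
    <!-- All server ports are incremented by this value,
         e.g. 9443 -> 9444, 9763 -> 9764 -->
    <Offset>1</Offset>
</Ports>
```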
Define the datasource declaration according to your RDBMS in <DAS_HOME>/repository/conf/datasources/master-datasources.xml. This database is used to store the summarized data after DAS finishes its analysis; APIM later uses the same database to fetch the summary data and display it on the APIM dashboard. Here we use a MySQL database as an example, but you can configure it with H2, Oracle, etc. Note that you must always use WSO2AM_STATS_DB as the datasource name.
Also note that the Auto Commit option should be disabled when working with DAS. You can set this in the JDBC URL, or by adding the line <defaultAutoCommit>false</defaultAutoCommit> to the datasource's <configuration> tag.
<datasource>
    <name>WSO2AM_STATS_DB</name>
    <description>The datasource used for setting statistics to API Manager</description>
    <jndiConfig>
        <name>jdbc/WSO2AM_STATS_DB</name>
    </jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <url>jdbc:mysql://localhost:3306/TestStatsDB</url>
            <username>db_username</username>
            <password>db_password</password>
            <driverClassName>com.mysql.jdbc.Driver</driverClassName>
            <maxActive>50</maxActive>
            <maxWait>60000</maxWait>
            <testOnBorrow>true</testOnBorrow>
            <validationQuery>SELECT 1</validationQuery>
            <validationInterval>30000</validationInterval>
            <defaultAutoCommit>false</defaultAutoCommit>
        </configuration>
    </definition>
</datasource>
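As an alternative to the <defaultAutoCommit> pool property, autocommit can be switched off in the JDBC URL itself. With MySQL Connector/J this can be done through the sessionVariables connection property — a sketch; verify the property name against your driver version:

```xml
<url>jdbc:mysql://localhost:3306/TestStatsDB?sessionVariables=autocommit=0</url>
```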
If you use MySQL as the database, download the MySQL driver from here
and copy it to <DAS_HOME>/repository/components/lib. As with the earlier BAM-based APIM stat publishing and analysis, DAS does not create the table structure in the database automatically; you have to do it manually. Find the correct schema creation script under the dbscripts folder and import it into the database created above (e.g., use mysql.sql to create the schema in the above DB).
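The database creation and schema import can be sketched as follows. The credentials and database name come from the datasource example above; the exact location of mysql.sql under the dbscripts folder varies by distribution, so check your product's dbscripts directory first:

```shell
mysql -u db_username -p -e "CREATE DATABASE TestStatsDB;"
mysql -u db_username -p TestStatsDB < <PRODUCT_HOME>/dbscripts/stat/sql/mysql.sql
```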
DAS uses SparkSQL to analyze data. All the definitions of the data published from APIM, and of how it should be analyzed with Spark, are shipped to DAS as a .car file.
Note: if you are using MySQL, also copy the MySQL driver library to <AM_HOME>/repository/components/lib.
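Because APIM reads the summary data directly from this database, the same WSO2AM_STATS_DB datasource should also be declared on the APIM side, in <AM_HOME>/repository/conf/datasources/master-datasources.xml — a sketch mirroring the connection details used above:

```xml
<datasource>
    <name>WSO2AM_STATS_DB</name>
    <jndiConfig>
        <name>jdbc/WSO2AM_STATS_DB</name>
    </jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <!-- Same connection details as the DAS-side datasource -->
            <url>jdbc:mysql://localhost:3306/TestStatsDB</url>
            <username>db_username</username>
            <password>db_password</password>
            <driverClassName>com.mysql.jdbc.Driver</driverClassName>
        </configuration>
    </definition>
</datasource>
```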
DAS configuration overview
Let's invoke an API to generate traffic and see the statistics
Deploy the Sample Weather API
Deploy the sample WeatherAPI by logging in to the APIM Publisher
Sample Weather API
Then log in to the Store and subscribe to the API you created
Invoke the API using the Store's API Console or curl
Invoke the API
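If you prefer curl, an invocation looks like the following. The gateway host, API context/version, and access token below are placeholders — copy the real values from your subscription in the Store:

```shell
curl -k -H "Authorization: Bearer <ACCESS_TOKEN>" "https://<GATEWAY_HOST>:8243/weather/1.0.0/"
```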
Then wait a few minutes (< 5 minutes) for the analytics to be generated
Then navigate to the Publisher's Statistics section and click API Usage
WeatherAPI usage
Data purging is one option for removing historical data in DAS. Since DAS does not allow deleting rows from DAS tables or dropping the tables themselves, this option is very important. With data purging, you can keep data analysis performing well without removing the analyzed summary data. Here we purge only the stream data fired by APIM, which is contained in the following tables.
ORG_WSO2_APIMGT_STATISTICS_DESTINATION
ORG_WSO2_APIMGT_STATISTICS_FAULT
ORG_WSO2_APIMGT_STATISTICS_REQUEST
ORG_WSO2_APIMGT_STATISTICS_RESPONSE
ORG_WSO2_APIMGT_STATISTICS_WORKFLOW
ORG_WSO2_APIMGT_STATISTICS_THROTTLE
Make sure not to purge data from any tables other than those listed above; doing so will wipe out your summarized historical data. There are two ways to purge data in DAS: through the management console's data purge dialog, or through the purging configuration in <DAS_HOME>/repository/conf/analytics/analytics-config.xml.
Data Purge Dialog box
Note that this will affect all tenants
<analytics-data-purging>
    <!-- Indicates whether purging is enabled. To enable data purging for a
         cluster, this property must be enabled on all nodes. -->
    <purging-enable>true</purging-enable>
    <cron-expression>0 0 12 * * ?</cron-expression>
    <!-- Tables to include in purging. Use a regex to specify the table names
         to include. -->
    <purge-include-table-patterns>
        <table>.*</table>
        <!--<table>.*jmx.*</table>-->
    </purge-include-table-patterns>
    <!-- All records inserted before the specified retention period are
         eligible for purging. -->
    <data-retention-days>365</data-retention-days>
</analytics-data-purging>