How WSO2 Stream Processor / Streaming Integrator based analytics works, and the initial steps to debug issues

APIM Analytics 2.6.0 and EI Analytics 6.4.0, 6.5.0, and 6.6.0 are based on the WSO2 Stream Processor, while APIM Analytics 3.0.0, 3.1.0, and 3.2.0 are based on the Streaming Integrator, with some customizations.

WSO2 products (APIM, EI) publish events to the Analytics node over the Thrift transport (TCP). Please refer to the documentation [1] for detailed information about the analytics-related Thrift transport and the Thrift transport thread model.
There is a queue on the APIM/EI side that holds events before they are sent to the analytics node, and a corresponding queue on the Analytics side that holds the received events.

The Analytics node processes these received events according to the logic defined in the Siddhi applications and inserts the results into the database. The data is later retrieved from the database and shown in the widgets or in the APIM Publisher UI.

In the API Manager, the following handlers are responsible for publishing events to the analytics side:
APIMgtLatencyStatsHandler:- Measures request and response latencies.
APIMgtUsageHandler:- Publishes request data events to the Analytics node for the collection and analysis of statistics.
APIMgtResponseHandler:- Initializes data publishing and publishes events upon successful API invocations.

So, in a situation where events are not being published to the APIM Analytics side, we can enable debug logs for these handler classes on the APIM side. They will print logs such as the following.

DEBUG {} - Publishing success API invocation event from gateway to analytics for: /test/system/usecase/v1 with ID: 42ea8942-ed63-4e20-b408-fbff206cdac8 started at [2020.01.16 12:03:12,068 EST] {}
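As a hedged sketch, in APIM 2.6.0 this can be done by adding logger entries to <APIM_Home>/repository/conf/log4j.properties (APIM 3.x uses log4j2.properties instead). The package names below are assumptions based on common WSO2 APIM packaging; verify the exact handler class and package names in your distribution:

```properties
# Assumed package for APIMgtUsageHandler / APIMgtResponseHandler (verify in your APIM version)
log4j.logger.org.wso2.carbon.apimgt.usage.publisher=DEBUG
# Assumed package for gateway handlers such as APIMgtLatencyStatsHandler
log4j.logger.org.wso2.carbon.apimgt.gateway.handlers=DEBUG
```

A server restart (or a logging configuration reload, where supported) is needed for the change to take effect.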

Also, on the APIM Analytics side, to verify that events are being received from the API Manager, we can add a log sink to the related stream in the APIM_EVENT_RECEIVER.siddhi file, which is located in the <Analytics_Home>/wso2/worker/deployment/siddhi-files directory.

For example, to add a log to the request stream, we can add @sink(type='log') as below,

@sink(type='log', prefix='EVENT-LOGGER')
define stream InComingRequestPrintStream (meta_clientType string,
applicationConsumerKey string,
applicationName string,
applicationId string,
applicationOwner string,
apiContext string,
apiName string,
apiVersion string,
apiResourcePath string,
apiResourceTemplate string,
apiMethod string,
apiCreator string,
apiCreatorTenantDomain string,
apiTier string,
apiHostname string,
username string,
userTenantDomain string,
userIp string,
userAgent string,
requestTimestamp long,
throttledOut bool,
responseTime long,
serviceTime long,
backendTime long,
responseCacheHit bool,
responseSize long,
protocol string,
responseCode int,
destination string,
securityLatency long,
throttlingLatency long,
requestMedLat long,
responseMedLat long,
backendLatency long,
otherLatency long,
gatewayType string,
label string);

Once the sink is added, the received events will be printed in the carbon.log file as below.

[2020-08-13 13:49:40,015] INFO {} - EVENT-LOGGER : Event{timestamp=1597344580003, data=[{"correlationID":"83f01a9c-8ba6-45bf-891e-0e8af92d1295","keyType":"PRODUCTION"}, ikbXoJZIg71qiiv6Ido0, test, 14, wso2apimnager, /test/1.0, test, 1.0, /, /*, POST, wso2apimgr, carbon.super, Unlimited,, l1d-live-test@TEST.ORG,,, Apache-HttpClient/4.1.1 (java 1.5), 1597344579874, false, 129, 3, 126, false, 8810, https — 1, 200,, 0, 0, 0, 0, 126, 0, SYNAPSE, Synapse], isExpired=false}

If the log is not printed on the analytics side, we can conclude that the event has not reached the Analytics node, and we can check the following:
i. Connectivity between APIM and Analytics (e.g., using telnet).
ii. Add the debug log to the APIMgtResponseHandler on the APIM side as mentioned above and see whether the events are being published.
iii. Check the log files of the gateways.
iv. Check the handlers in the Synapse configurations of the API on the APIM side.
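As an alternative to telnet, the connectivity check in step i can be sketched with a few lines of Python. The host name and the Thrift receiver port below are placeholders; use the data receiver ports configured in your own deployment:

```python
import socket

def is_port_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (placeholder host/port; substitute your Analytics node host and
# the Thrift data receiver port from its configuration):
# print(is_port_reachable("analytics.example.com", 7612))
```

If this returns False, the problem is at the network/firewall level rather than in the analytics configuration.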

If the received events are visible after adding the above sink log but there are still issues on the analytics side, the related errors normally appear in the log files of the analytics node. We can inspect those logs to get an idea about the issue; they are located in the <Analytics_Home>/wso2/worker/logs directory.

Apart from the above, during the investigation we can check whether the received events have been persisted in the database.
The database contains aggregation tables for different time frames (SECONDS, MINUTES, HOURS, DAYS, MONTHS, YEARS).
Normally, received events are first persisted in the related "_SECONDS" table of the APIM_ANALYTICS_DB and then aggregated up to the higher-level tables as per the time frame.
The default retention periods of those tables in APIM Analytics are:

_SECONDS table:- 5 minutes
_MINUTES table:- 72 hours
_HOURS table:- 90 days
_DAYS table:- 1 year
_MONTHS table:- 10 years
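These retention periods come from the purging configuration on the Siddhi aggregation definitions. The snippet below is an illustrative sketch of what such a definition looks like, not the exact shipped definition; the stream name, attributes, and intervals are placeholders, so check the actual siddhi-files in your distribution:

```siddhi
@purge(enable='true', interval='10 sec',
       @retentionPeriod(sec='5 minutes', min='72 hours', hours='90 days',
                        days='1 year', months='10 years', years='all'))
define aggregation ApiUserPerAppAgg
from ApiUserPerAppStream
select apiName, apiVersion, apiCreator, username,
       count() as totalRequestCount
group by apiName, apiVersion, username
aggregate by requestTimestamp every sec ... year;
```

The @retentionPeriod values map directly to the table retention periods listed above.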

So we can check the related SECONDS and MINUTES tables for recently received events.
For example, to check events related to API usage information, we need to look at the ApiUserPerAppAgg_SECONDS and ApiVersionPerAppAgg_SECONDS tables.
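For instance, a quick check against the APIM_ANALYTICS_DB could look like the following. AGG_TIMESTAMP is the standard aggregation timestamp column in Siddhi aggregation tables, but the exact row-limiting syntax varies by RDBMS (LIMIT here assumes MySQL-style databases):

```sql
-- Latest entries in the per-second aggregation for API usage
SELECT * FROM ApiUserPerAppAgg_SECONDS
ORDER BY AGG_TIMESTAMP DESC
LIMIT 10;
```

If recent invocations do not appear here within a few seconds, the events are not being persisted.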
If recently received events are missing from these tables, we may need to check the Siddhi files located in the <Analytics_Home>/wso2/worker/deployment/siddhi-files directory, since events are received, processed, and inserted into the DB through those Siddhi applications.

Then those stats are retrieved from the aggregation tables and shown in the Publisher UI (in APIM 2.6.0) or in the dashboards (APIM 3.0.0 or above). Normally, store queries (REST API requests) are used to retrieve the data from the database.

In a situation where the stats are not showing in the Publisher UI (in APIM 2.6.0), we need to check the store query related to the specific widget.
For example:
The store query related to the API Usage widget:

{"appName":"APIM_ACCESS_SUMMARY","query":"from ApiUserPerAppAgg on apiCreatorTenantDomain=='carbon.super' within 1566498600000L, 1569220702000L per 'days' select apiName, apiVersion, apiCreator, username, sum(totalRequestCount) as total_request_count, apiContext group by apiName, apiVersion, username, apiCreator, apiContext order by total_request_count DESC;"}

The store query related to the API Latency widget:

{"appName":"APIM_ACCESS_SUMMARY","query":"from ApiExeTime on(apiName=='aaa' AND apiVersion=='1.0.0' AND apiCreatorTenantDomain=='carbon.super') within 1569136316000L, 1569222716000L per 'hours' select apiName, apiContext, apiCreator, apiVersion, AGG_TIMESTAMP, responseTime, securityLatency, throttlingLatency, requestMedLat, responseMedLat, backendLatency, otherLatency;"}

Likewise, the stats are loaded into the widgets using store queries, and each widget has separate store queries as per its use case.
Please refer to the documentation [2], [3] for detailed information about store queries.
In APIM 2.6.0, we can get the exact store query of a widget by enabling debug logs for the relevant statistics client class in the APIM Publisher; once enabled, the related store queries are printed in the API Manager log file.


But in API Manager Analytics 3.0.0 and above, the queries are defined in the widgetConf.json file of each widget, located in the <Analytics_Home>/wso2/dashboard/deployment/web-ui-apps/analytics-dashboard/extensions/widgets/<specific_widget>/ directory.

In APIM Analytics, data is retrieved from both the Analytics DB and the AM_DB to show the stats in the dashboards. The Siddhi Store Data Provider is used to retrieve the analytics data, and the RDBMS Streaming Data Provider is used to retrieve the APIM-related data from the AM_DB.

Siddhi Store Data Provider:- Runs a Siddhi store query; it queries dynamic tables.

Please refer to the documentation [6] for detailed information about these data providers.

During an investigation of the APIM Analytics 3.0.0 dashboard, we can generate a HAR file to capture these queries, then inspect the network trace to see the requests and responses of the widgets and continue the investigation from there.
To generate a HAR file, please refer to the documentation [4].

Further, we can execute those store queries directly against the APIM Analytics worker and check whether they return results; based on the results, we can continue the investigation.
Similarly, we can analyze the AM_DB-related RDBMS queries and execute them directly against the AM_DB if there are any issues related to that data.
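A minimal Python sketch of running such a store query against the worker's REST API is shown below. The endpoint path (/stores/query), HTTPS port 7443, and admin/admin credentials follow common WSO2 Stream Processor conventions, but treat them as assumptions and verify the port and credentials in your deployment:

```python
import base64
import json
import urllib.request

def build_store_query_request(host: str, app_name: str, query: str,
                              username: str = "admin", password: str = "admin"):
    """Build an HTTPS POST request for the Siddhi store query REST API."""
    payload = json.dumps({"appName": app_name, "query": query}).encode("utf-8")
    credentials = base64.b64encode(f"{username}:{password}".encode()).decode()
    return urllib.request.Request(
        f"https://{host}:7443/stores/query",  # assumed default worker HTTPS port
        data=payload,
        headers={"Content-Type": "application/json",
                 "Authorization": f"Basic {credentials}"},
        method="POST",
    )

# Build a request for a simplified usage query (the time range is a placeholder).
request = build_store_query_request(
    "localhost",
    "APIM_ACCESS_SUMMARY",
    "from ApiUserPerAppAgg within 1566498600000L, 1569220702000L per 'days' "
    "select apiName, sum(totalRequestCount) as total_request_count group by apiName;",
)
print(request.full_url)
# Sending it (not executed here) would be:
# rows = urllib.request.urlopen(request).read()
```

This keeps the payload construction separate from the network call, so the same helper can be reused with curl-style one-off checks or scripted comparisons across time ranges.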

In API Manager Analytics 3.0.0 and above, apart from the above queries, we need to check the dashboard logs for any errors related to the issue.
We also need to check the "auth.configs" section in the deployment.yaml file of the dashboard profile, since the analytics dashboard may fail to load if there are issues in the IdP configurations. In addition, we need to make sure that the APIM_ANALYTICS_DB is properly shared between the dashboard profile and the worker profile.

