Pattern-type (Part 4): Adding your own monitoring collector to a pattern-type with SmartCloud Application Services

In the previous articles, I explained how to create a pattern-type for a single server and for a master-slave topology, and how to add static scalability to a pattern-type. Now, if we want dynamic scalability, that is, scalability which reacts to monitoring data, we have to be able to collect information from the server, and that is exactly what I will explain in this article.

But first I would like to stress the difference between a pattern-type and a pattern. A pattern-type provides all the components, links, scalability rules… needed to build a pattern. The best example is the pattern-type for Web Applications (see demo), which provides Enterprise Application and Database components plus scalability rules so that you can design your own pattern and deploy it multiple times. So in this article we show how to create a pattern-type, which requires more effort than creating a pattern based on an existing pattern-type. ISVs and companies with home-made applications that do not fit any of the existing pattern-types should be interested in this, because a pattern-type lets you put all the infrastructure intelligence in one place in order to deliver the best pattern to your customers.

The final goal is to monitor the number of files in a given directory of a server and to display a graph under ‘Manage’ -> ‘Monitoring’ -> ‘Middleware Monitoring’.

This is done in multiple steps:

1) Define the format of the data to be collected in a metadata file.

2) Create the collector script that will collect the information on the server.

3) Create the scripts that will upload the metadata file and the collector script to the server.

4) Register the collector.

5) Provide configuration to the UI to show a graph of the collected data.

Metadata file:

We have to create a metadata file that specifies the format of the data sent by the collector. This metadata file will be used by the IWD agent to parse the collected data.

{
    "version": 1,
    "category": ["files"],
    "update_interval": 120,
    "metadata": [
        {
            "files": {
                "metrics": [
                    {
                        "attribute_name": "nbFiles",
                        "metrics_name": "nbFiles",
                        "metric_type": "COUNTER"
                    }
                ]
            }
        }
    ]
}

Because a metadata file can be used for multiple categories, we specify in the ‘category’ element the list of categories for which this metadata file will be used, and for each category we define the metrics. In our case, the collector response will have to contain the attribute ‘nbFiles’, defined by ‘attribute_name’. The ‘metrics_name’ is the name exposed by the API and must be used whenever we reference this metric. We also define the ‘metric_type’, here COUNTER.
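To make the mapping concrete, here is a small, hypothetical Python sketch of how collected data could be matched against this metadata file. The real IWD agent logic is internal, so the function and variable names below are mine and purely illustrative of the attribute_name -> metrics_name mapping:

```python
import json

# The metadata file from above, as a Python structure
METADATA = {
    "version": 1,
    "category": ["files"],
    "update_interval": 120,
    "metadata": [
        {"files": {"metrics": [
            {"attribute_name": "nbFiles",
             "metrics_name": "nbFiles",
             "metric_type": "COUNTER"}
        ]}}
    ],
}

def extract_metrics(metadata, response):
    """Map each declared attribute_name to the value found in the collector response."""
    result = {}
    # Merge the list of per-category content entries into one dict
    content = {k: v for entry in response["content"] for k, v in entry.items()}
    for entry in metadata["metadata"]:
        for category, spec in entry.items():
            for metric in spec["metrics"]:
                value = content[category][metric["attribute_name"]]
                result[metric["metrics_name"]] = value
    return result

# A response in the format our collector will produce
response = json.loads('{"version":1,"category":["files"],"content":[{"files":{"nbFiles":2}}]}')
print(extract_metrics(METADATA, response))  # {'nbFiles': 2}
```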

We will place this file in the /plugin/files/collectors directory of our plugin.
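Putting the pieces together, the plugin layout used in this article looks roughly like the following. This is only a sketch: the exact structure depends on your plugin, and the role scripts (install.py, configure.py) live under the role directory of the plugin:

```
plugin/
├── config.json
├── monitoring_ui.json        <- UI configuration (step 5)
└── files/
    └── collectors/
        ├── Server-meta.json  <- metadata file (step 1)
        └── files.sh          <- collector script (step 2)
```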

Collector Script:

IWD offers different possibilities to collect information from a server: this can be done via HTTP, WCA, and others. One of them is scripts, and this is the one I selected because it is the easiest to understand. You can also implement your own Java class implementing the ICollector interface. Since all collectors have to follow this interface, the collector response must follow a given format. Our script, called ‘files.sh’ and placed next to the metadata file, is:

#!/bin/sh
# Count the files in the directory passed as first argument and emit the
# result in the format expected by the script collector
nbFiles=`ls $1 | wc -l`
resp="{\"version\":1,\"category\":[\"files\"],\"content\":[{\"files\":{\"nbFiles\":$nbFiles}}]}"
echo $resp

The response will look like:

{"version":1,"category":["files"],"content":[{"files":{"nbFiles":2}}]}
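To sanity-check the collector before packaging it, you can recreate the script locally and run it against a scratch directory. The /tmp paths below are only for this local test; on the deployed server the script lives in /home/idcuser/collector:

```shell
# Recreate files.sh locally and run it against a directory with two files
rm -rf /tmp/watched && mkdir -p /tmp/watched
touch /tmp/watched/a.txt /tmp/watched/b.txt
cat > /tmp/files.sh <<'EOF'
nbFiles=`ls $1 | wc -l`
resp="{\"version\":1,\"category\":[\"files\"],\"content\":[{\"files\":{\"nbFiles\":$nbFiles}}]}"
echo $resp
EOF
sh /tmp/files.sh /tmp/watched
```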

Upload the metadata and script files to the server:

Now, during server installation, we have to upload these files to the server; this is done by adapting the ‘install.py’ file of the role.

import maestro
import logging
import urlparse
import os

logger = logging.getLogger("install.py")
logger.debug("Install Server Part")

# Prepare (chmod +x, dos2unix) and copy scripts to the agent scriptdir
maestro.install_scripts('scripts')

# Download the collector script
installerUrl = urlparse.urljoin(maestro.parturl, '../files/collectors/files.sh')
filesScript = os.path.join('/home/idcuser/collector', 'files.sh')
maestro.download(installerUrl, filesScript)
# Convert the collector script to unix format
rc = maestro.trace_call(logger, ['dos2unix', filesScript])
maestro.check_status(rc, 'Failed to convert files.sh')
# Make the collector script executable
rc = maestro.trace_call(logger, ['chmod', '+x', filesScript])
maestro.check_status(rc, 'Failed to chmod files.sh')
# Create the directory to be monitored
rc = maestro.trace_call(logger, ['mkdir', '/home/idcuser/files'])
maestro.check_status(rc, 'Failed to create directory /home/idcuser/files')

# Download the metadata file
installerUrl = urlparse.urljoin(maestro.parturl, '../files/collectors/Server-meta.json')
metadata = os.path.join('/home/idcuser/collector', 'Server-meta.json')
maestro.download(installerUrl, metadata)
# Convert the metadata file to unix format
rc = maestro.trace_call(logger, ['dos2unix', metadata])
maestro.check_status(rc, 'Failed to convert Server-meta.json')

First, we download files.sh into the /home/idcuser/collector directory, convert it to unix format, and make it executable.

Second, we create /home/idcuser/files, the directory to be monitored.

Then, we download the metadata file (Server-meta.json) and convert it to unix format.

This is one way to implement it, but of course there are others; for example, we could upload a single script that performs the chmod, mkdir…

Register the collector:

We have to register the collector, and this is done via the ‘maestro.monitorAgent.register()’ call. We will do that in the configure.py file of the role.

nodeName = maestro.node['name']
roleName = nodeName + '.' + maestro.role['name']
logger.debug("Node:%s" % nodeName)
logger.debug("Role:%s" % roleName)
maestro.monitorAgent.register('{\
    "node":"%s",\
    "role":"%s",\
    "collector":"com.ibm.maestro.monitor.collector.script",\
    "config":{\
        "metafile":"/home/idcuser/collector/Server-meta.json",\
        "executable":"/home/idcuser/collector/files.sh",\
        "arguments":"/home/idcuser/files",\
        "validRC":"0",\
        "workdir":"/tmp",\
        "timeout":"5"}}' % (nodeName, roleName))

It is important that the ‘role’ here is the concatenation of the node name and the role name (i.e. Server-Server-12343454688.Server). The ‘collector’ is the name of the interface that will handle the collection; here we choose the script collector. We also have to specify the location of the metadata file, the location of the collector script itself, the arguments for the collector (here the directory we would like to monitor), the valid exit code, a working directory, and a timeout.
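As a design note, building the registration payload with json.dumps is less error-prone than hand-escaping a JSON string with backslash continuations. The sketch below produces an equivalent payload; the node name is illustrative, and in configure.py the resulting string would be passed to maestro.monitorAgent.register:

```python
import json

# Illustrative values; in configure.py these come from maestro.node and maestro.role
node_name = "Server-Server-12343454688"
role_name = node_name + ".Server"  # role must be nodeName.roleName

payload = json.dumps({
    "node": node_name,
    "role": role_name,
    "collector": "com.ibm.maestro.monitor.collector.script",
    "config": {
        "metafile": "/home/idcuser/collector/Server-meta.json",
        "executable": "/home/idcuser/collector/files.sh",
        "arguments": "/home/idcuser/files",
        "validRC": "0",
        "workdir": "/tmp",
        "timeout": "5",
    },
})
# maestro.monitorAgent.register(payload)  # as in configure.py
print(payload)
```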

Configure the UI:

Now, we have to create a monitoring_ui.json file to display the graph, and place this file next to the config.json file.

Our monitoring_ui.json file is:

[
    {
        "version": 2,
        "category": "files",
        "label": "Files",
        "displayRoles": ["server"],
        "displays": [
            {
                "label": "File Count",
                "monitorType": "HistoricalNumber",
                "chartType": "Lines",
                "metrics": [
                    {
                        "attributeName": "nbFiles",
                        "label": "Nb"
                    }
                ]
            }
        ]
    }
]

After the deployment of a virtual application pattern based on this pattern-type, this generates a graph in the ‘Middleware Monitoring’ view of the server, updated periodically with the number of files located in the monitored directory.

Conclusion:

As you can see, it is possible to add your own monitoring capabilities to a pattern-type. With this, we are on the right track to create a plugin with an auto-scaling feature that reacts to the number of files in a specific directory.

References:

IWD 3.1 InfoCenter