Pattern-type (Part 13): How to create shared services for IBM SmartCloud Application Workload Service

3) Define and implement the service

In this part, we will create a service that is callable via a REST API. To do so, we need to register a new OSGI service implementing the RegistryProvider interface. This interface defines a getRegistry() method, in which we will return the IP address, as this is the key piece of information a client needs to call the REST API exposed by the server.
3.1) Define the OSGI files for the service provider
We will again use the ‘New’->’OSGI Service Component’ wizard provided by the PDK plugin, but this time set the parameters as follows:
name=serviceRegistryProvider
service type=Registry Provider
Implementation Class=com.itdove.iwd.lab.PoolManagerRegistryProviderImpl
and click ‘Finish’.

A new OSGI-INF file is created, the MANIFEST.MF is updated, and a Java class is created.

The minimum packages that have to be imported in the MANIFEST.MF are:

Import-Package: com.ibm.json.java,
 com.ibm.maestro.common.http,
 com.ibm.maestro.common.utils,
 com.ibm.maestro.model.transform,
 com.ibm.maestro.model.transform.template,
 com.ibm.websphere.ras,
 com.ibm.websphere.ras.annotation,
 com.ibm.ws.ffdc,
 com.ibm.maestro.iaas,
 org.apache.wink.client

3.2) Declare the service in the topology.
We have to declare the newly created serviceRegistryProvider in the topology as follows:

{
    "service-registry": [{
        "type": "serviceRegistryProvider"
    }],
    "vm-templates": [
        {
            "persistent": true,
            "name": "SharedService-PoolManager",
            "roles": [
                {
                    "parms": {
                        "poolsize": "$attributes.poolsize"
                    },
                    "type": "PoolManager",
                    "name": "PoolManager"
                }
            ],
            "packages": [
                "POOLMANAGER_PKG"
            ]
        }
    ]
}

3.3) Implement the Java class for the service.
You will find the Java implementation in the PDK sample under iwd-pdk-workspace/plugin.com.ibm.sample.sharedservice.service/src/com/ibm/service/internal/SampleRegistryProvider.java. The only change you have to make is to replace ‘SharedService-Management’ with the actual name of our service, ‘SharedService-PoolManager’.
3.4) Create the API methods.
Now we have to implement the API methods. We will create four methods:
– Get the number of resources in the pool.
– Return whether a given pool-size threshold has been reached.
– Pick a resource from the pool.
– Return a resource to the pool.
3.4.1) Define the API methods:
This is done by creating a file describing the different REST calls. We will package this file as part of a node part.
So, using the PDK plugin ‘New’->’Plug-in Node Part’ wizard, enter ‘servicePoolManagerAPI’ as the name; the nodeparts/servicePoolManagerAPI directory is created along with its setup.py.
We now create a properties directory in which we place our API definition file, called ‘servicePoolManagerAPI.json’.

The content is:

{ 
  "servicename":"poolmanager",
  "operations":[
   { "type" : "POST", "parms": [
      { "resource":"pool", "clientdeploymentexposure":true, "role":"SharedService-PoolManager.PoolManager", "script": "restapi.py pick", "timeout" : 60000 }
     ] 
   },
   { "type" : "DELETE", "parms": [
      { "resource":"pool", "clientdeploymentexposure":true, "role":"SharedService-PoolManager.PoolManager", "script": "restapi.py release", "timeout" : 60000 }
     ] 
   },
   { "type" : "GET", "parms" : [
      { "resource":"state", "clientdeploymentexposure":true, "role":"SharedService-PoolManager.PoolManager", "script": "restapi.py state", "timeout" : 60000},
      { "resource":"threshold", "clientdeploymentexposure":true, "role":"SharedService-PoolManager.PoolManager", "script": "restapi.py threshold", "pattern":"{arg1}" }
     ] 
   }
  ]
}
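
Before packaging the node part, it can be useful to sanity-check that this definition is valid JSON and lists the operations you expect. A small, hypothetical helper script (not part of the PDK; adjust the path to your workspace) could look like this:

import json

# Hypothetical local check: load servicePoolManagerAPI.json and list its operations.
# Adjust the path to wherever the properties directory lives in your workspace.
with open('nodeparts/servicePoolManagerAPI/properties/servicePoolManagerAPI.json') as f:
    api = json.load(f)  # raises an error if the file is not valid JSON

print('service name: %s' % api['servicename'])
for op in api['operations']:
    for parm in op['parms']:
        print('%s %s -> %s' % (op['type'], parm['resource'], parm['script']))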

Now, we have to place this file at a well-known location on the server. This can be done by creating a script in nodeparts/servicePoolManagerAPI/common/install.

This script will contain:

import logging
import maestro
import os

logger = logging.getLogger("servicePoolManagerAPI/setup.py")

# Copy the REST API definition into the shared service's JSON configuration directory.
CONFIGDIR = maestro.sharedservice.getJSONCfgDir()
FILENAME = '/servicePoolManagerAPI.json'
DEST = CONFIGDIR + FILENAME
SRC = '../../properties' + FILENAME

os.popen('mkdir -p ' + CONFIGDIR)
os.popen("mv " + SRC + " " + DEST)

The node part must be added to the package used by the PoolManager role, so we modify the config.json accordingly:

...
   "packages": {
      "POOLMANAGER_PKG": [
         {
            "node-parts":[
                             {
                                 "node-part":"nodeparts/servicePoolManagerAPI.tgz"
                             }
                        ],
            "parts": [
               {
                  "part": "parts/poolmanager.scripts.tgz"
               }
            ]
         }
      ]
   }
...

3.4.2) Implement the API.

As you can see above, each API operation calls restapi.py, and since the role is ‘SharedService-PoolManager.PoolManager’, restapi.py must be created in the PoolManager role. The correct function is invoked based on the method, resource, and parameters of the request.

The restapi.py file will be created in the PoolManager role scripts, next to configure.py, and will contain the following code:

import maestro
import logging, os, sys, time
import gettext
import pickle

logger = logging.getLogger("PoolManager/restapi.py")

# Operation context provided by the shared-service runtime: 'parms' holds the
# request parameters and 'method' names the requested operation (pick, release,
# state or threshold, as declared in servicePoolManagerAPI.json).
parms = maestro.operation['parms']
mode = maestro.operation['method']
# The pool size is persisted between calls in a pickle file in the role's script directory.
filename = "%s/poolsizedata" % maestro.node['scriptdir']

def pick():
    global poolSize
    logger.debug('PoolSize before pick:%s' % poolSize)
    poolSize = poolSize - 1
    logger.debug('PoolSize after pick:%s' % poolSize)
    maestro.operation['return_value'] = "PoolSizeValue:%s" % poolSize

def release():
    global poolSize
    logger.debug('PoolSize before release:%s' % poolSize)
    poolSize = poolSize + 1
    logger.debug('PoolSize after release:%s' % poolSize)
    maestro.operation['return_value'] = "PoolSizeValue:%s" % poolSize

def state():
    global poolSize
    logger.debug('PoolSize at state:%s' % poolSize)
    maestro.operation['return_value'] = "PoolSizeValue:%s" % poolSize

def threshold():
    global poolSize
    logger.debug('PoolSize at threshold:%s = %s' % (parms, poolSize))
    threshold = parms['arg1']
    if poolSize < int(threshold):
        maestro.operation['return_value'] = 'Reached:%s' % poolSize
    else: 
        maestro.operation['return_value'] = 'Not Reached:%s' % poolSize
    logger.debug('Successfully threshold')

def run():
    if mode == "pick":
        pick()
    elif mode == "release":
        release()
    elif mode == "state":
        state()
    elif mode == "threshold":
        threshold()
    else: 
        maestro.operation['successful'] = False
        return_msg = "No valid method executed"
        maestro.operation['return_value'] = return_msg

def getPoolSize():
    # Load the persisted pool state from the pickle file.
    infile = open(filename,'rb')
    poolsizeVar = pickle.load(infile)
    infile.close()
    return poolsizeVar

def putPoolSize():
    global poolSize
    # Persist the updated pool size back to the pickle file.
    poolSizeVar['poolsize'] = poolSize
    output = open(filename,'wb')
    pickle.dump(poolSizeVar, output)
    output.close()

# Load the current pool state, then dispatch the request and persist the
# updated value (the operation fails if the pool is empty).
poolSizeVar = getPoolSize()
poolSize = int(poolSizeVar['poolsize'])
if poolSize == 0:
   maestro.operation['successful'] = False
   maestro.operation['return_value'] = "Pool Empty"
else:
   run()
   putPoolSize()

PS: Note that I don’t manage concurrent calls; if two requests come in at the same time, the resulting poolSize value is uncertain. One way to serialize access is sketched below.
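
One simple way to close that gap on Linux would be to take an exclusive lock on a lock file around the whole read-modify-write cycle, for example by replacing the last block of restapi.py with something like the following sketch (the ‘.lock’ file name is made up for illustration):

import fcntl

# Illustrative sketch only: serialize access to the pool state with an
# exclusive file lock held for the whole read-modify-write cycle.
lockfile = open(filename + '.lock', 'w')
fcntl.flock(lockfile, fcntl.LOCK_EX)  # blocks until the lock is available
try:
    poolSizeVar = getPoolSize()
    poolSize = int(poolSizeVar['poolsize'])
    if poolSize == 0:
        maestro.operation['successful'] = False
        maestro.operation['return_value'] = "Pool Empty"
    else:
        run()
        putPoolSize()
finally:
    fcntl.flock(lockfile, fcntl.LOCK_UN)
    lockfile.close()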

As I persist the state of the pool in a file, I have to initialize that file. I do this in configure.py, which thus becomes:

import logging
import maestro
import pickle

logger = logging.getLogger("PoolManager/configure.py")

filename = "%s/poolsizedata" % maestro.node['scriptdir']
output = open(filename,'wb')
poolsize = maestro.parms['poolsize']
poolsizeVar = {'poolsize':poolsize}
pickle.dump(poolsizeVar, output)
output.close()

BTW: maestro.node[‘scriptdir’] is maybe not the best place to save data, but it is good enough for the demo.
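
If you wanted to keep that state out of the script directory, one option is a small dedicated data directory created at configure time; a sketch, with /var/opt/poolmanager chosen arbitrarily for illustration (restapi.py would have to use the same path):

import os

# Sketch: keep the pool state in a dedicated directory instead of the
# role's script directory. The path below is an arbitrary example.
DATADIR = '/var/opt/poolmanager'
if not os.path.isdir(DATADIR):
    os.makedirs(DATADIR)
filename = os.path.join(DATADIR, 'poolsizedata')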