Transactional Infrastructure – Next generation cloud native architecture to enable innovative pricing models for SaaS products

The advances in PaaS and APaaS offerings from all the major cloud service providers have enabled rapid adoption of the microservices architecture paradigm by SaaS vendors big and small. Furthermore, the application of the Domain-Driven Design methodology has ensured that microservices are well aligned with business models decomposed into well-defined bounded contexts. This has also allowed SaaS vendors and customers alike to assemble complementary services from different providers and integrate them to support new business models and/or value-added services. However, most vendors still continue to base their pricing strategies on a standard multi-tiered per-user subscription model.

The advances in per-use pricing models based on the serverless deployment mechanisms provided by the cloud providers (AWS Lambda, Azure Functions, etc.) are certainly helping SaaS vendors reduce their fixed infrastructure costs and pass on the benefits to customers; but the complexities involved in tracking per-subscriber use of shared resources, even at the function level, are formidable to tackle without some further tweaks to the serverless deployment options for microservices. These complexities increase significantly when multiple microservices are deployed into shared containers managed through platforms such as Kubernetes or APaaS offerings such as Pivotal Cloud Foundry.

To address these complexities we introduce the concept of “Transactional Infrastructure”. It allows the well-understood best practices for designing multi-tenant systems to be incorporated into the architecture of cloud-native applications based on microservices. We shall illustrate the concept in the context of a hypothetical BIaaS product offering hosted on AWS.

BIaaS System Architecture

The figure above illustrates the typical components involved in a BIaaS product offering. For the sake of this article we will ignore accessibility challenges with the source data systems by assuming that a simple VPN tunnel is sufficient to gain pull access to the source data stores. A push-based mechanism would need a software appliance to be deployed behind the corporate firewall, in which case the computing resources required for it would be the sole responsibility of the customer.

For the purpose of this article, we define a transaction as a distinct outcome that delivers tangible value to a business user who subscribes to the service. Consequently, we shall ignore the exploratory work associated with the initial configuration required to onboard a new customer onto the service, which would be charged as a one-time onboarding fee. We can also ignore subsequent configuration changes and ad-hoc queries performed by the customer, although the latter could still be brought within the ambit of a transaction. Thus, a transaction is a repeatable process that is executed either at scheduled times or on demand by any authorised user of the subscribing organisation or an individual.

A typical implementation of such a system using microservices deployed as Lambda functions on AWS might be as depicted in the figure below.

BIaaS on AWS with SWF

The AWS Simple Workflow Service (SWF) is used to indicate an imperative style of programming implemented to support asynchronous, loosely coupled execution. AWS Redshift is proposed as the OLAP data store, although other AWS offerings such as DynamoDB or the Relational Database Service (RDS) could also be considered. A secured configuration of the Simple Storage Service (“Secure S3”) is proposed to address the security concerns associated with a multi-tenant BLOB store. It is conceivable that instead of S3 the extracted data could be uploaded directly to a staging area in Redshift for further processing. Auxiliary AWS services such as IAM, Cognito, CloudWatch, SNS, and SES are proposed to address other common multi-tenancy concerns and notification requirements.

While this architecture does support asynchronous loosely coupled execution, the imperative style of implementing business logic favours direct synchronous calls amongst the various microservices with the workflow engine providing the overall orchestration function.

Alternatively, a truly asynchronous execution model can be implemented by adopting an event-driven, streams-based architecture, replacing SWF with a queue management service such as the AWS Simple Queue Service (SQS) or Kafka (https://kafka.apache.org/). This is illustrated in the figure below.

BIaaS on AWS with SQS
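
To make the event-driven variant concrete, the sketch below shows how an extract microservice might consume events from an SQS queue instead of being invoked synchronously by a workflow. It is a minimal illustration only: the queue name, event fields, and the handle_extract_event stub are assumptions, not part of the reference architecture.

```python
import json
import boto3

# Hypothetical queue; in practice it would be provisioned by the deployment automation.
sqs = boto3.client("sqs")
queue_url = sqs.get_queue_url(QueueName="biaas-extract-events")["QueueUrl"]

def poll_and_process():
    """Long-poll the queue and hand each event to the extract logic."""
    response = sqs.receive_message(
        QueueUrl=queue_url,
        MaxNumberOfMessages=10,
        WaitTimeSeconds=20,  # long polling keeps idle cost low
    )
    for message in response.get("Messages", []):
        event = json.loads(message["Body"])
        handle_extract_event(event)
        # Acknowledge only after successful processing.
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])

def handle_extract_event(event):
    # Stub for the actual extract microservice logic.
    print("extracting dataset", event.get("dataset_id"))
```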

A detailed discussion of event-driven architecture and reactive programming techniques is beyond the scope of this article; interested readers are encouraged to explore other articles on the topic, some of which are listed below:

https://www.infoq.com/news/2017/11/event-sourcing-microservices
https://www.infoq.com/news/2015/06/ddd-events-microservices
https://www.infoq.com/presentations/cloud-native-kafka-netflix
https://www.infoq.com/presentations/reactive-ddd-distributed-systems

As these articles make apparent, an event-driven implementation architecture and reactive programming techniques are the solution of choice for optimising resource utilisation and, as we shall see, are more suitable for incorporating instrumentation to enable tracking of per-transaction resource usage. Hence, in the rest of this article we shall focus on the event-driven architecture, although most of the techniques apply equally to the orchestration-based approach.

A trivial approach to accommodating multi-tenancy in a microservices-based architecture would be to create a complete deployment stack image and simply deploy it across different AWS accounts, each dedicated to a single subscribing user and/or organisation. The AWS CloudFormation, OpsWorks, and CodeDeploy services can be leveraged to support this deployment strategy across computing resources available as EC2 instances, the Elastic Container Service (ECS), and Lambda functions, combined with various storage and other services. Resource consumption can then easily be tracked at a per-subscriber level and billed at cost plus a managed-services overhead. However, this requires a certain fixed capacity to be reserved for each subscriber which cannot be leveraged for other subscribers; the strategy is therefore not sufficient to serve as a market differentiator for a SaaS vendor.
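
For illustration, this per-subscriber stack deployment might be scripted along the following lines, with one CloudFormation stack created from a shared template per subscriber account. The template URL, parameter names, and the assumption that a boto3 session carrying credentials for the subscriber's account is already available are all hypothetical.

```python
import boto3

def deploy_subscriber_stack(subscriber_id: str, account_session: boto3.session.Session) -> None:
    """Create a dedicated copy of the product stack in a subscriber-specific account.

    `account_session` is assumed to already hold credentials for that account,
    e.g. obtained via an assumed cross-account role.
    """
    cfn = account_session.client("cloudformation")
    cfn.create_stack(
        StackName=f"biaas-{subscriber_id}",
        TemplateURL="https://s3.amazonaws.com/example-templates/biaas-stack.yaml",  # hypothetical
        Parameters=[{"ParameterKey": "SubscriberId", "ParameterValue": subscriber_id}],
        Capabilities=["CAPABILITY_NAMED_IAM"],
        Tags=[{"Key": "subscriber", "Value": subscriber_id}],
    )
```

Because every subscriber gets a full copy of the stack, billing is trivially separable, but so is the reserved capacity, which is exactly the drawback noted above.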

On the other hand, if all the components are deployed within a single AWS account, all resources can be optimally leveraged across the entire available load at any given time, helping the SaaS vendor minimise infrastructure costs. However, no suitable instrumentation services are currently available to help the vendor track resource utilisation even at a per-subscriber level, let alone at a per-transaction level. Services such as AWS CloudWatch provide instrumentation at a very coarse level, and fine-grained monitoring services such as AWS X-Ray or Zipkin (https://zipkin.io/) are primarily distributed tracing and performance monitoring mechanisms that are not equipped to handle transaction context and/or resource utilisation.

The challenges outlined above can be addressed by treating the transaction context as a first-class concern within the microservices architecture and by extending the deployment automation mechanisms into add-on, SaaS-vendor-specific deployment services built on the low-level Infrastructure-as-Code APIs exposed by all the cloud service providers. These vendor-specific deployment services automatically inject the subscriber and transaction context into the deployment metadata so that every invocation of the business services can capture this context within the event data as well as the trace logs. A “transaction analysis” service can then scan and analyse the events generated and processed by the business services to determine the resource utilisation, and thus the costs, associated with each subscriber account and/or the specific transactions of interest. The resulting data can be passed along to accounting systems to assist with customer invoicing and reconciliation, taking into account the specific SLAs and usage-tier discounts that the customer may have signed up for. The figure below illustrates the resulting “transactional infrastructure” architecture.

BIaaS on AWS with Trans Infra
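
As a rough sketch of what one of the vendor-specific Deployment functions might do during onboarding, the snippet below creates a subscriber-specific “Secure S3” staging bucket with default encryption and records the subscription context as tags, so that downstream services and the transaction analysis service can recover it. The bucket naming convention, tag keys, and region handling are assumptions.

```python
import boto3

s3 = boto3.client("s3")

def provision_subscriber_bucket(subscriber_id: str, region: str = "us-east-1") -> str:
    """Create a subscriber-specific 'Secure S3' staging bucket and record the
    subscription context as tags (names and tag keys are illustrative)."""
    bucket = f"biaas-staging-{subscriber_id}"  # hypothetical naming convention
    if region == "us-east-1":
        s3.create_bucket(Bucket=bucket)
    else:
        s3.create_bucket(Bucket=bucket,
                         CreateBucketConfiguration={"LocationConstraint": region})
    # Enforce encryption at rest to address the multi-tenant BLOB store concern.
    s3.put_bucket_encryption(
        Bucket=bucket,
        ServerSideEncryptionConfiguration={
            "Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]
        },
    )
    # Embed the subscription context in the deployment metadata.
    s3.put_bucket_tagging(
        Bucket=bucket,
        Tagging={"TagSet": [{"Key": "subscriber_id", "Value": subscriber_id}]},
    )
    return bucket
```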

As part of the customer onboarding process the appropriate Deployment functions are invoked to create subscriber-specific Secure S3 buckets and the other elements of the “Subscription” context, incorporating user identities federated using AWS Cognito. When a transaction is initiated by a specific user, this subscription context is utilised appropriately; for an extract transaction, for example, it determines the S3 bucket into which the data is to be staged. Subsequently, all events generated are enriched by the Event Enrichment functions to embed the subscription context. The processing of these enriched events in turn embeds the subscription context into the trace logs captured in X-Ray. The Event Analysis functions then scan all the events on a continual or periodic basis and use the embedded subscription context to extract the corresponding trace logs from X-Ray, generating usage data aggregated at a one-minute granularity for compute resources and a one-GB granularity for storage resources. Finally, this resource usage data can be exported to accounting systems for invoicing purposes.
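
A minimal sketch of an Event Enrichment function is shown below: it embeds the subscription context into each business event and annotates the active X-Ray trace so that the Event Analysis functions can later correlate usage by subscriber and transaction. The topic name, event fields, and the lookup_subscription helper are assumptions; the annotation calls use the standard aws_xray_sdk package.

```python
import json
import os

import boto3
from aws_xray_sdk.core import xray_recorder  # standard X-Ray SDK for Python

sns = boto3.client("sns")
ENRICHED_TOPIC_ARN = os.environ["ENRICHED_TOPIC_ARN"]  # hypothetical configuration

def enrich_handler(event, context):
    """Embed the subscription context into an incoming business event."""
    subscription = lookup_subscription(event["user_id"])  # hypothetical helper

    enriched = dict(event)
    enriched["subscription_context"] = {
        "subscriber_id": subscription["subscriber_id"],
        "transaction_id": event.get("transaction_id"),
    }

    # Annotate the trace so Event Analysis can correlate X-Ray data with the
    # subscriber and transaction (annotations go on a subsegment because the
    # Lambda root segment is managed by the service).
    subsegment = xray_recorder.begin_subsegment("enrichment")
    subsegment.put_annotation("subscriber_id", subscription["subscriber_id"])
    if event.get("transaction_id"):
        subsegment.put_annotation("transaction_id", event["transaction_id"])
    xray_recorder.end_subsegment()

    # Re-publish the enriched event for downstream business services.
    sns.publish(TopicArn=ENRICHED_TOPIC_ARN, Message=json.dumps(enriched))
    return enriched

def lookup_subscription(user_id):
    # Stub: in practice this would resolve the Cognito federated identity
    # against the subscription store created during onboarding.
    return {"subscriber_id": f"sub-{user_id}"}
```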

Thus, the proposed “Transactional Infrastructure” paradigm can help SaaS vendors, and even enterprises, gain detailed insights into their resource utilisation costs and apply them to secure competitive advantages and/or internal efficiencies, thereby contributing to both the top and the bottom lines.

Best practices for implementing Web Services based APIs

 

Application Programming Interfaces (APIs) have evolved in step with computing paradigms, from shared statically linked libraries to completely decoupled web service end points. A common thread across all these forms, however, is that the key to large-scale adoption of an API lies in the ease of orchestration across calls to the multiple methods/functions it contains. This allows an API to be highly granular in nature and thereby amenable to use across a wide spectrum of usage scenarios. But as any API developer will attest, this is easier said than done, primarily because most classical methods of API distribution, including Software Development Kits (SDKs), do not allow orchestration metadata to be exposed in a dynamic manner that is easy to consume. The best recourse is to include API documentation and to rely on the reflection functionality available within the underlying programming language.

Most modern Integrated Development Environments (IDEs) leverage this embedded documentation, along with frameworks that rely on reflection such as the JavaBeans specification, to provide the “IntelliSense” functionality that so many of us have come to rely upon. But this still does not provide any insight into the best sequence in which the functions within an API should be invoked to accomplish a particular task. Some APIs, notably the DirectX APIs, introduced the concept of a pipeline wherein you could register a series of callback functions to be invoked in the desired sequence, but the programmer still needs to rely on sample code to determine the sequencing of calls required to set up the pipeline. However, the web services based paradigm for developing and hosting APIs has changed the picture significantly and holds the promise of allowing API discoverability and explorability in a unique manner.

At first glance, developing APIs based on web service end points offers little beyond loose coupling of the interface from the implementation, combined with distributed computing that allows for higher scalability. Discoverability of web services based APIs can be achieved through the use of online registries such as APIs.io, which rely on an open format called APIs.json to expose suitable metadata about the APIs; this is very similar to repositories for static APIs such as Maven. A key additional benefit of implementing APIs as web services, however, is the ability to include metadata in the response body that can be interrogated automatically to determine the next service call to be made. One such framework is based on the Hypertext Application Language (HAL) specification, and it allows for API explorability if the following best practices are followed:

  • Use Links & Actions in responses: this allows for a dynamic and possibly configurable flow of API calls. For RESTful web services the HATEOAS constraint is a great way to implement this functionality (see the sketch after this list).
  • Expose metadata as a separate resource or introduce a meta tag in the response: this can be leveraged to reduce the response size, either when multiple items share a set of attributes related to, say, taxonomy, or when contextually unnecessary information is always being included in the response. This is to be used differently from attribute selection via query parameters and requires a clear definition of what constitutes metadata.
  • Consider the use of the OData protocol for exposing data via web services, as this allows for programming by convention rather than by contract.
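
As a concrete illustration of the first two points, a HAL-style representation might carry both navigational links and a factored-out metadata section, as in the sketch below; the resource shape, link relations, and field names are purely illustrative.

```python
import json

def build_report_resource(report_id: str, page: int) -> str:
    """Assemble a HAL-style response with links, an action, and a meta section."""
    resource = {
        "id": report_id,
        "status": "ready",
        "_links": {
            "self":     {"href": f"/reports/{report_id}?page={page}"},
            "next":     {"href": f"/reports/{report_id}?page={page + 1}"},
            # An 'action' link telling the client what it can do next.
            "schedule": {"href": f"/reports/{report_id}/schedule", "method": "POST"},
        },
        "_meta": {
            # Shared taxonomy attributes factored out of the individual items.
            "taxonomy": {"domain": "sales", "granularity": "daily"},
        },
    }
    return json.dumps(resource, indent=2)

print(build_report_resource("rpt-42", page=1))
```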

In addition to the above best practices, which simplify API orchestration, the following should also be followed to allow for suitable instrumentation in web services based APIs and to ensure the highest standards of security, scalability, and backward compatibility:

  • Make call tracing and debugging more efficient by requiring each request to include a timestamp and by assigning a request id to each request received. The request id should be returned in the response, and logging of it along with the timestamp should be encouraged within the client app (see the sketch after this list).
  • Enable response caching, ideally, through the use of ETags or other mechanisms as might be applicable within a given context.
  • Prefer the use of JSON Web Tokens (JWTs) via OAuth 2.0 for implementing security (http://self-issued.info/docs/draft-ietf-oauth-json-web-token.html). Additional links:
    http://tools.ietf.org/html/draft-ietf-oauth-jwt-bearer-07
    https://developers.google.com/accounts/docs/OAuth2ServiceAccount
  • Support expiry and renewal of JWTs by developing appropriate client SDKs.
  • Prefer URL-based API versioning, since that helps with DevOps automation, but query-parameter-based versioning can also be supported on an as-required basis. SDK versioning may be applied as well.
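
By way of example, the request id and timestamp convention from the first point above could be implemented as simple request hooks; the sketch below uses Flask purely for illustration, and the header names are assumptions.

```python
import time
import uuid

from flask import Flask, g, request

app = Flask(__name__)

@app.before_request
def assign_request_id():
    # Honour a client-supplied id if present, otherwise mint one.
    g.request_id = request.headers.get("X-Request-Id", str(uuid.uuid4()))
    g.received_at = time.time()

@app.after_request
def echo_request_id(response):
    # Return the id so the client can log it alongside its own timestamp.
    response.headers["X-Request-Id"] = g.request_id
    app.logger.info("request %s handled in %.3fs",
                    g.request_id, time.time() - g.received_at)
    return response

@app.route("/ping")
def ping():
    return {"status": "ok"}
```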

Of course, the use of API hosting platforms such as Apigee can certainly help adherence to a number of the best practices defined above, especially when combined with the microservices architecture paradigm for implementation.

Architecting solutions for Cluster Computing as opposed to Cloud Computing

Recently, while evaluating storage options as part of a consulting engagement, I came across the Isilon offering from EMC, and some of the articles in the associated literature talked about the use of Isilon for cluster computing. Given that the emphasis is still on storage, specifically HDFS, it was intriguing that the possibility of compute functions being delegated to nodes à la MapReduce was discussed quite a bit. Further reading into what is considered to be cluster computing led me to the Wikipedia article on Computer Clusters.

So it is quite clear what the difference is between cloud computing and cluster computing, to the extent that we can safely say cluster computing is a subset of cloud computing, especially given offerings from Amazon Web Services such as Elastic MapReduce and the newly launched Lambda. Hence, in this blog article I will focus instead on how a solution needs to be architected to leverage cluster computing effectively and get the best bang for the buck out of cloud computing.

Let's start by addressing the biggest challenge with implementing cluster computing: co-location of data on the compute node. While this is an easy problem to solve when using the MapReduce paradigm, it represents a real challenge when using cluster computing to achieve scalability in typical usage scenarios. Although the use of technologies such as InfiniBand may be an option in some cases, a cost-benefit analysis would render it impractical for most typical business applications.

One immediate option is to utilize a microservices-based architecture. But it is clear from the description in the seminal article by Martin Fowler that it does not address co-location of transaction data, although he does talk about decentralized data management and polyglot persistence. Clearly, it is not really meant for easy adoption in a cluster computing scenario. Interestingly, though, there is a reference to the Enterprise Service Bus as an example of smart end points, and that is what got me thinking about extending the concept to cluster computing.

The trick, then, is to apply the event-based programming model to the microservices architecture concept, leveraging in turn the smart end points aspect. All the transaction data needs to be embedded in the event, combined with any contextual state data. Through the use of interceptors or other adapters the data can then be deserialized into the appropriate service-specific representation. This is key, since the service need not, and actually should not, be built to consume the event data structure.
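
A minimal sketch of this idea, with all names illustrative: the event carries the full transaction payload, and a small adapter deserializes it into the representation the service actually consumes, so the service itself never depends on the event schema.

```python
import json
from dataclasses import dataclass

@dataclass
class OrderPayment:
    """Service-specific representation; knows nothing about the event schema."""
    order_id: str
    amount: float
    currency: str

def payment_adapter(raw_event: str) -> OrderPayment:
    """Interceptor/adapter: maps the event structure onto the service's own type."""
    event = json.loads(raw_event)
    data = event["transaction_data"]  # all state travels with the event
    return OrderPayment(
        order_id=data["order_id"],
        amount=float(data["amount"]),
        currency=data.get("currency", "USD"),
    )

def process_payment(payment: OrderPayment) -> None:
    # The compute node needs nothing beyond what the event carried with it.
    print(f"charging {payment.amount} {payment.currency} for order {payment.order_id}")

# Example event as it might arrive on the bus or queue.
raw = json.dumps({"type": "PaymentRequested",
                  "transaction_data": {"order_id": "o-1", "amount": "19.99"}})
process_payment(payment_adapter(raw))
```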

While the approach described above would require you to invest significantly in setting up the requisite infrastructure components to provision compute nodes on the fly to handle events, the recent release of the AWS Lambda service provides an opportunity to apply this concept more easily, albeit with some new terminology: microservices are implemented as AWS Lambda functions! It would be very interesting to figure out whether argument reduction is supported. Check this blog again in a few weeks to find out…