Transactional Infrastructure – Next generation cloud native architecture to enable innovative pricing models for SaaS products

The advances in PaaS and APaaS offerings from all the major cloud service providers have enabled rapid adoption of the microservices architecture paradigm by SaaS vendors big and small. Furthermore, the application of Domain Driven Design methodology has ensured that microservices are well aligned with business models decomposed into well defined bounded contexts. This has also allowed SaaS vendors and customers alike to assemble complementary services from different providers and integrate them to support new business models and/or value added services. However, most vendors still continue to base their pricing strategies on a standard multi-tiered per user subscription model.

The per use pricing models enabled by the serverless deployment mechanisms offered by the cloud providers (AWS Lambda, Azure Functions, etc.) are certainly helping SaaS vendors reduce their fixed infrastructure costs and pass the benefits on to customers; but the complexities involved in tracking per subscriber use of shared resources, even at the function level, are formidable without further tweaks to the serverless deployment options for microservices. These complexities increase significantly when multiple microservices are deployed into shared containers managed through platforms such as Kubernetes or APaaS offerings such as Pivotal Cloud Foundry.

To address these complexities we introduce the concept of “Transactional Infrastructure”. It allows the well understood best practices for designing multi-tenancy systems to be incorporated into the architecture of cloud native applications based on microservices. We shall illustrate this concept in the context of a hypothetical BIaaS (Business Intelligence as a Service) product offering hosted on AWS.

BIaaS System Architecture

The figure above illustrates the typical components that would be involved in a BIaaS product offering. For the sake of this article we will ignore accessibility challenges with the source data systems by assuming that a simple VPN tunnel is sufficient to gain pull access to the source data stores. A push based mechanism would require a software appliance to be deployed behind the corporate firewall, and the computing resources required for it would be the sole responsibility of the customer.

For the purpose of this article, we define a transaction as a distinct outcome that delivers tangible value to a business user who subscribes to this service. Consequently, we shall ignore the exploratory work associated with the initial configuration required to onboard a new customer onto the service, which would be charged as a one-time onboarding fee. We can also ignore subsequent configuration changes and ad-hoc queries performed by the customer, although the latter could still be brought within the ambit of a transaction. Thus, a transaction is a repeatable process executed either at scheduled times or on demand by any authorised user of the subscribing organisation or individual.

A typical implementation of such a system using microservices deployed as Lambda functions on AWS might be as depicted in the figure below.

BIaaS on AWS with SWF

The AWS Simple Workflow Service (SWF) indicates an imperative style of programming implemented to support asynchronous, loosely coupled execution. AWS Redshift is proposed as the OLAP data store, although other AWS offerings such as DynamoDB or the Relational Database Service (RDS) can also be considered. A secured configuration of the AWS Simple Storage Service (Secure S3) is proposed to address the security concerns associated with a multi-tenancy BLOB store. It is conceivable that instead of S3 the extracted data could be uploaded directly to a staging area in Redshift for further processing. Auxiliary AWS services such as IAM, Cognito, CloudWatch, SNS, and SES are proposed to address other common multi-tenancy concerns and notification requirements.

While this architecture does support asynchronous loosely coupled execution, the imperative style of implementing business logic favours direct synchronous calls amongst the various microservices with the workflow engine providing the overall orchestration function.

Alternatively, a truly asynchronous execution model can be implemented by adopting an event driven, streams based architecture, replacing the SWF service with a queue management service such as the AWS Simple Queue Service (SQS) or Kafka (https://kafka.apache.org/). This is illustrated in the figure below.

BIaaS on AWS with SQS

A detailed discussion of event driven architecture and reactive programming techniques is beyond the scope of this article; interested readers may refer to other articles on the topic, some of which are listed below:

https://www.infoq.com/news/2017/11/event-sourcing-microservices
https://www.infoq.com/news/2015/06/ddd-events-microservices
https://www.infoq.com/presentations/cloud-native-kafka-netflix
https://www.infoq.com/presentations/reactive-ddd-distributed-systems

As is apparent from these articles, an event driven implementation architecture and reactive programming techniques are the solution of choice for optimising resource utilisation and, as we shall see, are more suitable for incorporating the instrumentation needed to track per transaction resource usage. Hence, in the rest of this article we shall focus on the event driven architecture, although most of the techniques apply equally to the imperative, workflow driven model described earlier.
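As a minimal illustration of the event driven style, the sketch below publishes a domain event to an SQS queue using boto3. The queue URL, event type, and payload shape are hypothetical and not part of the reference architecture.

```python
import json
import boto3

# Hypothetical sketch: publish a domain event to an SQS queue so that
# downstream microservices can react to it asynchronously.
sqs = boto3.client("sqs", region_name="us-east-1")

def publish_event(queue_url, event_type, payload):
    """Send a loosely coupled domain event; consumers poll the queue."""
    message = {
        "eventType": event_type,   # e.g. "ExtractCompleted" (illustrative)
        "payload": payload,
    }
    return sqs.send_message(
        QueueUrl=queue_url,
        MessageBody=json.dumps(message),
    )

# Example usage (queue URL and dataset id are placeholders):
# publish_event("https://sqs.us-east-1.amazonaws.com/123456789012/biaas-events",
#               "ExtractCompleted", {"datasetId": "sales-2017-q4"})
```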

A trivial approach to accommodating multi-tenancy in a microservices based architecture would be to create a complete deployment stack image and simply deploy it across different AWS accounts, each dedicated to a single subscribing user and/or organisation. The AWS CloudFormation, OpsWorks, and CodeDeploy services can be leveraged to support this deployment strategy across computing resources available as EC2 instances, the Elastic Container Service (ECS), and Lambda functions, combined with various storage and other services. Resource consumption can then easily be tracked at a per subscriber level and billed at cost plus a managed services overhead. However, this requires a certain fixed capacity to be reserved for each subscriber which cannot be leveraged for other subscribers. Thus, this strategy is not sufficient to serve as a market differentiator for a SaaS vendor.
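A minimal sketch of this per-account deployment strategy, assuming a pre-existing CloudFormation template and hypothetical stack and parameter names, might look as follows.

```python
import boto3

# Hypothetical sketch: deploy a dedicated stack per subscriber from a shared
# CloudFormation template. The template URL and parameter names are
# illustrative assumptions, not part of the reference architecture.
cloudformation = boto3.client("cloudformation")

def deploy_subscriber_stack(subscriber_id, template_url):
    """Create one isolated stack per subscribing organisation."""
    return cloudformation.create_stack(
        StackName=f"biaas-{subscriber_id}",
        TemplateURL=template_url,
        Parameters=[
            {"ParameterKey": "SubscriberId", "ParameterValue": subscriber_id},
        ],
        Capabilities=["CAPABILITY_IAM"],
        # Tag the whole stack so AWS cost allocation reports can be split
        # per subscriber.
        Tags=[{"Key": "subscriber", "Value": subscriber_id}],
    )
```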

On the other hand, if all the components are deployed within a single AWS account, all resources can be optimally leveraged across the available load at any given time, helping the SaaS vendor minimise infrastructure costs. However, no suitable instrumentation services are currently available to help the vendor track resource utilisation even at a per subscriber level, let alone at a per transaction level. Services such as AWS CloudWatch provide instrumentation at a very coarse level, and fine grained monitoring services such as AWS X-Ray or Zipkin (https://zipkin.io/) are primarily distributed tracing and performance monitoring mechanisms that are not equipped to handle transaction context and/or resource utilisation.

The challenges outlined above can be addressed by making the transaction context a first class concern within the microservices architecture and by extending the deployment automation mechanisms into add-on, SaaS vendor specific deployment services built on the low level Infrastructure-as-Code APIs exposed by all the cloud services providers. These vendor specific deployment services automatically inject the subscriber and transaction context into the deployment metadata so that every invocation of the business services captures this context within the event data as well as the trace logs. A “transaction analysis” service can then scan and analyse the events generated and processed by the business services to determine the resource utilisation, and thus the costs, associated with each subscriber account and/or the specific transactions of interest. The resulting data can be passed along to accounting systems to assist with customer invoicing and reconciliation, taking into account the specific SLAs and usage tier discounts that the customer may have signed up for. The figure below illustrates the resulting “transactional infrastructure” architecture.

BIaaS on AWS with Trans Infra
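As a rough sketch of what such a vendor specific deployment service might do (the helper and its names are illustrative assumptions, not a prescribed implementation), the following tags a Lambda function and injects the subscriber context into its environment so that every invocation can stamp its events and trace logs.

```python
import boto3

# Hypothetical sketch of a vendor specific deployment helper that injects the
# subscriber ("Subscription") context into a Lambda function's metadata.
lambda_client = boto3.client("lambda")

def inject_subscription_context(function_name, subscriber_id):
    # Tag the function so cost allocation and per-subscriber reports can use it.
    function_arn = lambda_client.get_function(
        FunctionName=function_name)["Configuration"]["FunctionArn"]
    lambda_client.tag_resource(
        Resource=function_arn,
        Tags={"subscriber": subscriber_id},
    )
    # Expose the context to the function code via environment variables.
    # (Simplified: a real implementation would merge with any existing
    # environment variables rather than replacing them.)
    lambda_client.update_function_configuration(
        FunctionName=function_name,
        Environment={"Variables": {"SUBSCRIBER_ID": subscriber_id}},
    )
```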

As part of the customer onboarding process, the appropriate Deployment functions are invoked to create subscriber specific Secure S3 buckets and the other elements of the “Subscription” context, incorporating user identities federated using AWS Cognito. When a transaction is initiated by a specific user, this subscription context is utilised appropriately; for example, for an extract transaction the subscription context is used to choose the S3 bucket into which the data is to be staged. Subsequently, all events generated are enriched by the Event Enrichment functions to embed the subscription context. The processing of these enriched events in turn embeds the subscription context into the trace logs captured in X-Ray. The Event Analysis functions then scan all the events on a continual or periodic basis and use the embedded subscription context to extract the corresponding trace logs from X-Ray, generating usage data aggregated to a one minute scale for compute resources and a one GB scale for storage resources. Finally, this resource usage data can be exported to accounting systems for invoicing purposes.
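A minimal sketch of such an Event Enrichment function, assuming the aws-xray-sdk for Python and a hypothetical event shape, could be:

```python
import os
from aws_xray_sdk.core import xray_recorder

# Hypothetical sketch of an Event Enrichment Lambda handler: it embeds the
# subscription context into the event and annotates the X-Ray trace so the
# Event Analysis functions can later correlate usage with the subscriber.
def handler(event, context):
    subscription = {
        "subscriberId": os.environ.get("SUBSCRIBER_ID", "unknown"),
        "transactionId": event.get("transactionId", "unknown"),
    }
    # Embed the context in the event payload for downstream consumers.
    enriched = dict(event, subscription=subscription)

    # Annotate a subsegment so the trace logs carry the same context.
    subsegment = xray_recorder.begin_subsegment("event-enrichment")
    subsegment.put_annotation("subscriberId", subscription["subscriberId"])
    subsegment.put_annotation("transactionId", subscription["transactionId"])
    xray_recorder.end_subsegment()

    return enriched
```

Since X-Ray supports filtering traces by annotations, the Event Analysis functions can use these keys to pull only the traces relevant to a given subscriber or transaction.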

Thus, the proposed “Transactional Infrastructure” paradigm can help SaaS vendors, and even enterprises, gain detailed insights into their resource utilisation costs and leverage them for competitive advantage and/or internal efficiencies, thereby generating contributions to both the top and the bottom lines.


Taming the public cloud beast: Your monthly bill

Beyond a doubt, the public clouds have been a godsend to practically all the startups out there today. The ongoing price wars between the major players (Amazon, Microsoft, Google, IBM) have meant that the price per unit of compute/storage/network capacity has been on the decline. Even adoption amongst traditional large enterprises is on the increase, with success stories being written on the hour every hour. So how does one explain articles such as the one below?

Here’s why this startup ditched Amazon Web Services by John Cook

And this is not a one-off case. Other similar articles are available, although they don’t exactly show up on the first page of most search queries pertaining to cloud pricing/costs, thanks to the excellent SEO efforts of all the big providers.

The bottom line: just like dining out every day at an unlimited buffet leads to obesity, ad-hoc usage of cloud computing resources leads to a bloated bill that can take many a startup by surprise. Is the answer then to simply jump ship and switch over to a private cloud, or worse yet, a traditional infrastructure model? All of the players in the private cloud space are trying hard to convince you to do so; there is even an excellent white paper from Eucalyptus to help you take the leap. Maybe not just yet, if some of the strategies outlined below are adopted appropriately.

Utilize compute resources for the shortest possible time: Throughout the history of the World Wide Web, the one golden rule followed religiously has been: always be available, lest your loyal consumer ditch you the instant your site is down. Given that public clouds rely on sharing the same physical resources across multiple customers, it should come as no surprise that the cheapest pricing plan available for the longest time was the one wherein you spun up reserved instances with a guaranteed uptime of 99.995% or better. Not only do you end up paying an upfront charge, but the costs can also spiral as you keep adding nodes. To add to your woes, as the application and data complexity increases along with the upsurge in customers, you start spinning up the more expensive high end instances.

Invest instead in re-architecting your applications to utilize micro instances by adopting a microservices based approach, or better still, invest in building up your in-house DevOps skills to leverage the on-demand and spot pricing plans. The latest PaaS offerings, such as AWS Lambda from Amazon and Bluemix from IBM, provide a host of ready to use services that can be leveraged on an as-needed basis. In addition, the newest auto-scaling offerings from some of the providers allow you to spin up container based compute instances instead of entire VMs.
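As a rough illustration of the spot pricing approach, a spot capacity request through the EC2 API might look like the sketch below; the AMI ID, instance type, and bid price are placeholders, not recommendations.

```python
import boto3

# Hypothetical sketch: request cheap spot capacity instead of reserving
# high end instances. AMI ID, instance type, and bid price are placeholders.
ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.request_spot_instances(
    SpotPrice="0.05",            # maximum hourly bid in USD (placeholder)
    InstanceCount=2,
    LaunchSpecification={
        "ImageId": "ami-12345678",      # placeholder AMI
        "InstanceType": "t2.micro",     # micro instances, per the text above
    },
)
print(response["SpotInstanceRequests"][0]["State"])
```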

Have a crystal clear strategy for processing raw usage data and/or archive it as quickly as you can: Success in boosting site traffic, which invariably leads to more business, brings with it a deluge of raw usage data that in turn holds the secrets to the next chapter of your growth. Hence it is very tempting to hold on to as much usage data as you can. Moreover, there may not be a clean separation between transactional and raw usage data. All the cloud providers leverage this aspect of the growth phase of any startup to drive up your monthly spend. Hence it is critical to watch your storage needs very closely and adapt to increasing raw usage data very quickly.

To start with, ensure that you can clearly demarcate between transactional data and all other data generated up to the time the transaction is actually completed. Also make sure you can easily distinguish between anonymous usage data and data associated with a known, logged-in customer. Store all usage data using object based storage services such as AWS S3, limiting each bucket to a relatively short time duration, say five minutes, and employ data aggregation to reduce data volume by aggregating to a longer time duration, say one hour. The key here is not to try to convert the data into a full-fledged data warehouse/mart schema at this stage. Once the raw data has been thus processed, it should be archived on a daily basis using solutions such as AWS Glacier. If you don’t have a strategy to further utilize the semi-processed usage data to populate a data warehouse, then archive that as well, say on a weekly or monthly basis.
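A minimal sketch of such an archival policy, assuming a hypothetical bucket name and prefix, could use an S3 lifecycle rule to push semi-processed usage data to Glacier automatically:

```python
import boto3

# Hypothetical sketch: lifecycle rule that moves aggregated usage data to
# Glacier after one day and expires it after a year. The bucket name, prefix,
# and retention periods are placeholders.
s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="my-usage-data",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-usage-data",
                "Filter": {"Prefix": "aggregated/"},
                "Status": "Enabled",
                "Transitions": [{"Days": 1, "StorageClass": "GLACIER"}],
                "Expiration": {"Days": 365},
            }
        ]
    },
)
```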

Reduce network traffic to the compute instances and between different availability zones: This is probably the most easily overlooked aspect of your monthly bill. Most savvy startups will quickly utilize a CDN for static content and script caching, thereby reducing network traffic to the compute instances hosting the web applications. But as your overall cloud infrastructure grows and you start spanning availability zones to ensure high availability and disaster recovery, the corresponding increase in network traffic across availability zones will start adding up quickly. Luckily, your startup will have to be wildly successful before this component of the monthly bill requires much attention, and by that time you will be able to afford the high end talent required to optimize the architecture further.

The kind of monthly spend on public clouds described in the article referenced above represents a dream come true for most startups just out of the gate, but it is always a good idea to start adopting the right strategies and architectures to manage your monthly spend from the very beginning, when even a thousand bucks out of your pocket can seem like a million. Furthermore, the right architecture will help you eventually transition to a hybrid cloud model at the right time in the future with the least amount of effort and risk.

This blog was first published on the ContractIQ site at http://blog.contractiq.com/taming-the-public-cloud-beast-your-monthly-cloud-computing-bill/ on December 17, 2014.

Developer Workstations – The untapped Private Cloud

The “Infrastructure as a Service” (IaaS) model for cloud computing has matured significantly in recent years, with a large number of providers offering a variety of public, private, and hybrid clouds at very competitive prices. However, adoption of the “Platform as a Service” (PaaS) model is still lagging. The primary reason for this is quite easy to understand: most developers and delivery managers are least impacted by the IaaS model and hence can continue developing and delivering software the same way as before. Transitioning to a PaaS model, on the other hand, may involve significant changes to the software development and delivery process and thus represents a significant risk to the overall success of the project.

Challenges to PaaS adoption in Development Teams

One of the key challenges in adopting a PaaS model is the cost of making the target platform available to the entire development team for all functions. Instead, the typical use case is to implement Continuous Integration using PaaS offerings from vendors such as CloudMunch, OpenShift, Heroku, etc. But this still prevents developers from having access to all the features of the target platform, thereby constraining innovation to a small set of “experts”. Clearly, lack of infrastructure should not be a limiting factor for innovation and productivity.

Underutilization of Developer Workstations

Thanks to the exponential growth in computing power combined with the ever decreasing cost of hardware, every developer is typically provided with either a laptop or a desktop that has sufficient computing power to host an application server, a database, and any other required software locally for their own private use. Doing so allows for a number of benefits, from telecommuting to easier adoption of the Agile development methodology. Furthermore, the lack of consistent high speed broadband access, whether due to poor infrastructure in developing countries or network congestion over the airwaves in developed nations, limits access to shared enterprise PaaS resources for developers who are rarely tethered to a desk, whereas synchronization through a centralized code repository requires very little bandwidth. Finally, these local environments will eventually start diverging from each other and, more importantly, from the target environment, thereby introducing the risk that submissions from different team members may not play well together in the CI environment.

Cloud Infrastructure Management Frameworks to the rescue …

Private clouds based on open source software such as OpenStack or CloudStack may offer a solution wherein each developer workstation becomes a virtual machine host onto which an image of the target platform can be launched on demand. These images can be auto generated as part of the daily CI cycle. A developer would thus have access to the overall platform, whether to try out new features or to make configuration changes to resolve specific issues. The same infrastructure can also be leveraged to provide additional computing resources for the CI cycle or for load testing.

But not without some additional innovation

At present, the primary constraint of the open source cloud management frameworks is the required homogeneity of the host hardware, which severely limits the use of developer workstations as hosts. Furthermore, the use of bare metal hypervisors is not feasible, and the most popular operating systems for developer workstations are not ideally suited to hosting type 2 hypervisors. Instead, as a first step, it is recommended that an approach based on desktop virtualization products such as VirtualBox or VMware Player be adopted. As part of the daily CI build, a new version of the target environment can be packaged as an appliance and made available to developers for download; thus, in addition to getting the latest code at the start of the day, developers can also get the latest target environment and fire it up on their workstations, as sketched below. Additionally, some basic agent software can be developed to allow developers to add their local guest OS instances to a resource pool for use in intensive computing tasks such as load testing. Simultaneously, the open source frameworks can be extended to allow for a more heterogeneous mix of hosts.
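A minimal sketch of the developer-side workflow, assuming a hypothetical internal download URL for the nightly appliance and the VBoxManage CLI on the PATH, might be:

```python
import subprocess
import urllib.request

# Hypothetical sketch: fetch the appliance produced by the nightly CI build
# and import it into VirtualBox on the developer workstation. The URL and
# file name are placeholders.
APPLIANCE_URL = "https://ci.example.com/builds/latest/target-env.ova"
LOCAL_PATH = "target-env.ova"

urllib.request.urlretrieve(APPLIANCE_URL, LOCAL_PATH)
subprocess.run(["VBoxManage", "import", LOCAL_PATH], check=True)
```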

This blog was originally published at http://www.compassitesinc.com/blogs/developer-workstations on January 10, 2013