
A Brief 10-Point Guide to AWS Elastic Beanstalk

When I’m learning about new technologies, one of the first things I’ll do is write a numbered list from 1 to 10 and then fill it in with points about that particular technology. I realised that these lists might be useful to others, so I’m going to start posting them. This week I looked at AWS Elastic Beanstalk.

  1. Amazon’s Elastic Beanstalk is a service which allows you to deploy applications on AWS using a number of underlying AWS services, such as EC2 and S3. With Elastic Beanstalk you don’t have to worry about managing and orchestrating those services yourself; you can simply focus on your application.
  2. One of the main benefits of using AWS Elastic Beanstalk is that it abstracts away the need to deal with infrastructure. You don’t have to worry about configuring load balancing or scaling your application; you can simply upload your code and go (see the deployment sketch after this list).
  3. Another benefit of using AWS Elastic Beanstalk is that, assuming you were planning to host your application on AWS to begin with, there’s no additional charge for Elastic Beanstalk itself; you only pay for the underlying AWS resources your application uses.
  4. While you can sit back and let Elastic Beanstalk handle everything for you, AWS does give you the option to configure and control the AWS resources your application uses. And it’s not an all-or-nothing deal: you could, for example, decide to manage your EC2 instances manually, but leave your S3 buckets to be managed by Elastic Beanstalk.
  5. Setting up monitoring and logging tools for an application is often a full-time job in and of itself. With Elastic Beanstalk you don’t have to bother, because it comes with a monitoring interface that is integrated with Amazon CloudWatch and AWS X-Ray (see the health-check sketch after this list).
  6. One of the drawbacks of a system that abstracts away management is that working out what has gone wrong can be difficult: because so much is handled for you, the underlying error you need in order to diagnose the problem isn’t always easy to get at.
  7. An additional drawback of using Elastic Beanstalk is third-party integration. Some of the common culprits like Docker, Jenkins and GitHub are supported, but don’t expect the third-party integrations to be extensive.
  8. One of the pros of AWS Elastic Beanstalk is that you can easily integrate it with AWS CodePipeline, which enables you to check that the code you’ve just deployed is working correctly.
  9. Another benefit of Elastic Beanstalk’s automated management is that unavoidable chores like platform version and operating system updates can be done without any downtime to the application. Furthermore, should something go wrong with one of those updates, it is quite easy to roll the application back to an earlier state, again without downtime (unless, of course, the update itself caused the downtime). See the rollback sketch after this list.
  10. A final disadvantage of using AWS Elastic Beanstalk is that technical support from Amazon comes at an extra charge. While this is normally to be expected from modern SaaS and PaaS offerings, it is worth considering carefully here, given the difficulty of diagnosing problems with the system mentioned above.
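
Below are a few short sketches of what some of these points look like in practice, using Python and boto3. All of the application, environment, bucket and version names are made up for illustration. First, the “upload your code and go” flow from point 2: register a zipped source bundle as a new application version and point the environment at it.

```python
import boto3

eb = boto3.client("elasticbeanstalk", region_name="eu-west-1")

# Register a new application version from a source bundle that has
# already been uploaded to S3 (all names here are placeholders).
eb.create_application_version(
    ApplicationName="my-app",
    VersionLabel="v1.0.1",
    SourceBundle={"S3Bucket": "my-app-bundles", "S3Key": "my-app-v1.0.1.zip"},
)

# Point the running environment at the new version; Elastic Beanstalk
# takes care of provisioning, load balancing and rolling out the update.
eb.update_environment(
    EnvironmentName="my-app-prod",
    VersionLabel="v1.0.1",
)
```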
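
Next, the health check from point 5: the same environment health information the console shows can be pulled programmatically. This assumes enhanced health reporting is enabled on the (hypothetical) environment.

```python
import boto3

eb = boto3.client("elasticbeanstalk", region_name="eu-west-1")

# Fetch the current health of a placeholder environment. Enhanced
# health reporting must be enabled for the full set of attributes.
health = eb.describe_environment_health(
    EnvironmentName="my-app-prod",
    AttributeNames=["All"],
)
print(health["Status"], health["HealthStatus"], health.get("Causes"))
```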
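
And the rollback from point 9: every version you have deployed stays registered against the application, so rolling back is just redeploying an earlier version label.

```python
import boto3

eb = boto3.client("elasticbeanstalk", region_name="eu-west-1")

# Redeploy a previously registered version label (placeholder names).
eb.update_environment(
    EnvironmentName="my-app-prod",
    VersionLabel="v1.0.0",
)
```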

A Brief 10-Point Guide to AWS Aurora

If only AWS Aurora provided great streamed hams
AWS Aurora at this time of year, localised entirely in your application stack?

When I’m learning about new technologies, one of the first things I’ll do is write a numbered list from 1 to 10 and then fill it in with points about that particular technology. I realised that these lists might be useful to others, so I’m going to start posting them. This week I looked at AWS Aurora.

  1. Amazon Aurora is a cloud-based database service (database-as-a-service, or DBaaS) that is compatible with MySQL and PostgreSQL (see the connection sketch after this list).
  2. Using Amazon Aurora has the benefit that users don’t have to deal with the headaches involved in managing and maintaining physical hardware for their databases.
  3. Another benefit of Aurora over MySQL and PostgreSQL is performance: Amazon claims around five times the throughput of standard MySQL and around three times the throughput of standard PostgreSQL on the same hardware.
  4. With Aurora, worrying about database capacity is largely eliminated, because storage scales automatically from the 10 GB minimum up to the 64 TB maximum. This means the maximum table size with Aurora (64 TB) is four times the maximum table size of InnoDB (16 TB).
  5. A further benefit of Aurora over MySQL is replication: in Aurora, you can create up to 15 read replicas (see the replica sketch after this list).
  6. One of the most prominent drawbacks of AWS Aurora is that it is only compatible with MySQL 5.6 and 5.7.
  7. A somewhat minor drawback is that the port number for connections to the database cannot be configured; it is locked at 3306.
  8. Another benefit is that, as you would expect, Aurora integrates well with other AWS products. For example, you can invoke an AWS Lambda function from within an Aurora database cluster, and you can load data directly from S3 buckets.
  9. With AWS Aurora, the minimum RDS instance size you can use is db.r3.large, which has an impact on the cost of running Aurora.
  10. AWS Aurora is also not available on the AWS Free Tier.
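
A couple of sketches to go with the points above, again in Python and with placeholder names. First, the compatibility from point 1: because Aurora speaks the MySQL wire protocol, an ordinary MySQL driver such as PyMySQL connects to a cluster endpoint exactly as it would to a plain MySQL server.

```python
import pymysql  # third-party MySQL driver: pip install pymysql

# The endpoint, credentials and database name are placeholders.
conn = pymysql.connect(
    host="my-cluster.cluster-abc123xyz.eu-west-1.rds.amazonaws.com",
    port=3306,
    user="admin",
    password="not-a-real-password",
    database="appdb",
)

with conn.cursor() as cur:
    cur.execute("SELECT NOW()")
    print(cur.fetchone())

conn.close()
```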
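
Second, the replication from point 5: adding a read replica to an existing Aurora cluster amounts to creating another DB instance inside that cluster, and Aurora keeps it in sync automatically. The identifiers and instance class below are made up.

```python
import boto3

rds = boto3.client("rds", region_name="eu-west-1")

# Add a reader instance to an existing (placeholder) Aurora cluster.
rds.create_db_instance(
    DBInstanceIdentifier="my-cluster-reader-1",
    DBClusterIdentifier="my-cluster",
    DBInstanceClass="db.r3.large",
    Engine="aurora-mysql",
)
```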


A Brief 10-Point Guide to AWS Lambda

When I’m learning about new technologies, one of the first things I’ll do is write a numbered list from 1 to 10 and then fill it in with points about that particular technology. I realised that these lists might be useful to others, so I’m going to start posting them. This week I looked at AWS Lambda.

  1. AWS Lambda lets you run code without provisioning or managing servers. It is an event-driven computing platform: your code is executed when a configured event triggers it (see the handler sketch after this list).
  2. AWS Lambda uses a Function as a Service (FaaS) model of cloud computing.
  3. With AWS Lambda you don’t need an application running 24/7; as a consequence, you only pay for the time your functions are actually executing, which can lead to a significant cost reduction compared with traditional server-based architectures.
  4. Due to the nature of AWS Lambda’s FaaS model, development times for applications can be greatly reduced, because the problems of managing an application stack within a server-based context are eliminated.
  5. Planning and managing resources is an effort that is almost completely removed with AWS Lambda because of its auto-scaling: when more computing power is needed it scales resources up seamlessly, and conversely, when fewer resources are required it scales back down just as seamlessly.
  6. With AWS Lambda, a greater proportion of developer time is available for working on the problems and challenges of the business logic.
  7. One of the drawbacks of using AWS Lambda is that it is not necessarily faster than traditional architectures. When a new instance of a function is invoked, it first needs to start up a process containing the code to be executed (a “cold start”). This start-up time is not present in traditional server-based architectures, where the process or processes are running all the time.
  8. A further drawback of using AWS Lambda is the issue of concurrent executions of functions. The default account limit for concurrent executions is 1,000; depending on the region this can go up to 3,000, but it has to be specifically requested. Few applications will have to worry about this problem, however. One additional thing of note is that AWS Lambda does allow you to limit the concurrent executions of specific functions (see the concurrency sketch after this list).
  9. The current maximum execution time for an AWS Lambda function is 15 minutes, which can be a problem if the function is running a task that takes longer, although that would usually be a sign the function should be decomposed where possible (see the timeout sketch after this list).
  10. A further drawback of AWS Lambda is that individual functions are somewhat constrained in their access to computational resources. For example, the current RAM limit for an AWS Lambda function is 3 GB.
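
To make some of these points concrete, here are a few Python sketches with made-up names. First, point 1: a Lambda function is just a handler that AWS invokes with whatever event triggered it. This example assumes the function has been wired up to an S3 “object created” trigger.

```python
# Minimal Lambda handler for a hypothetical S3 "object created" trigger.
# The shape of `event` depends entirely on the trigger you configure.
def handler(event, context):
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        print(f"New object uploaded: s3://{bucket}/{key}")
    return {"status": "ok"}
```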
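
The per-function limit mentioned in point 8 is set through reserved concurrency, which both guarantees and caps how many copies of a function can run at once. The function name below is a placeholder.

```python
import boto3

lam = boto3.client("lambda", region_name="eu-west-1")

# Reserve (and cap) concurrency for one function so a spike in its
# invocations can't exhaust the shared account-wide limit.
lam.put_function_concurrency(
    FunctionName="image-resizer",
    ReservedConcurrentExecutions=50,
)
```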
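
And for point 9, the execution limit is per-function configuration capped at 900 seconds (15 minutes); anything that genuinely needs longer has to be decomposed or moved to a different service. Again, the function name is made up.

```python
import boto3

lam = boto3.client("lambda", region_name="eu-west-1")

# Raise a function's timeout towards the 15-minute (900-second) ceiling.
lam.update_function_configuration(
    FunctionName="nightly-report",
    Timeout=900,
)
```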