Categories: AWS, Brief 10-Point Guides, DevOps

A Brief 10-Point Guide to AWS Elastic Beanstalk

When I’m learning about new technologies, one of the first things I’ll do is write a numbered list from 1 to 10 and then fill it in with points about that particular technology. I realised these lists might be useful to others, so I’m going to start posting them. This week I looked at AWS Elastic Beanstalk.

  1. Amazon’s Elastic Beanstalk is a service that lets you deploy applications on AWS by orchestrating a number of different AWS services, such as EC2 and S3, on your behalf. With Elastic Beanstalk you don’t have to worry about the management and orchestration of these services; you can simply focus on your application.
  2. One of the main benefits of using AWS Elastic Beanstalk is that it abstracts away the need to deal with infrastructure. You don’t have to worry about configuring load balancing or scaling your application; you can simply upload your code and go.
  3. Another benefit of using AWS Elastic Beanstalk is that, assuming you were planning to host your application on AWS to begin with, there’s no additional cost to using it.
  4. While you have the ability to sit back and let Elastic Beanstalk handle everything for you, AWS does give you the option to configure and control the AWS resources your application uses. And what’s more, it’s not an all-or-nothing deal: you could, for example, decide to manage your EC2 instances manually but leave your S3 buckets to be managed by Elastic Beanstalk.
  5. Setting up monitoring and logging tools for your application is often a full-time job in and of itself. With Elastic Beanstalk you don’t have to bother, because it comes with a monitoring interface that is integrated with Amazon CloudWatch and AWS X-Ray.
  6. One of the drawbacks of a system that abstracts away the need for management is that understanding when things have gone wrong with Elastic Beanstalk can be difficult, because the underlying errors you need in order to diagnose a problem are often hidden from you.
  7. An additional drawback of using Elastic Beanstalk is third-party integration. Some of the common tools like Docker, Jenkins and GitHub are supported, but don’t expect the range of third-party integrations to be extensive.
  8. One of the pros of AWS Elastic Beanstalk is that you can easily integrate it with AWS CodePipeline, which enables you to check that the code you’ve just uploaded is working correctly.
  9. Another one of the benefits of the auto-management of AWS Elastic Beanstalk is that unavoidable tasks like updating platform versions and operating systems can be done without any downtime to the application. Furthermore, should something go wrong with one of those updates, it is quite easy to roll back the application to an earlier state, again without any downtime (unless, of course, the update itself caused downtime).
  10. A final disadvantage of using AWS Elastic Beanstalk is that if you require technical support from Amazon, there is a charge for it. While this is normally to be expected from modern SaaS and PaaS products, in this case it is something to consider carefully, given the difficulty of diagnosing problems with the system mentioned above.
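To make point 4 concrete: Elastic Beanstalk lets you override individual settings via configuration files placed in an `.ebextensions` directory in your source bundle, while everything you don’t override stays managed for you. A minimal sketch (the namespace and option names come from Elastic Beanstalk’s configuration option reference; the values themselves are illustrative):

```yaml
# .ebextensions/autoscaling.config
# Take manual control of just the Auto Scaling group size,
# leaving all other resources to Elastic Beanstalk's defaults.
option_settings:
  aws:autoscaling:asg:
    MinSize: 2
    MaxSize: 6
```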
Categories: AWS, Brief 10-Point Guides, Databases

A Brief 10-Point Guide to AWS DynamoDB

When I’m learning about new technologies, one of the first things I’ll do is write a numbered list from 1 to 10 and then fill it in with points about that particular technology. I realised these lists might be useful to others, so I’m going to start posting them. This week I looked at AWS DynamoDB.

  1. Amazon’s DynamoDB is a fully managed key-value and document database.
  2. As DynamoDB is fully managed, one of the benefits of using it is that there is no server to manage. More importantly, DynamoDB auto-scales to the demand on the database, so its performance remains consistent at scale.
  3. Performance is another reason to use DynamoDB. Amazon boasts ‘single-digit millisecond response times at any scale’.
  4. Data safety is another benefit of using DynamoDB. Amazon enables instant backups of ‘hundreds of terabytes of data’ with no performance impact. Further, with DynamoDB it is possible to restore the database to any state that existed within the last 35 days. More impressive is that this can be done with no downtime.
  5. A seldom-mentioned but very important benefit of DynamoDB is that Amazon provides a version of DynamoDB you can run locally, which is very useful for testing your application. Being able to run a local instance means you can point your integration tests at it, which gives you a more realistic test context: you are more likely to catch bugs that would occur in production, improving the quality of your code and speeding up the development cycle.
  6. Another benefit of using DynamoDB is that it accommodates cases where some data is accessed more than other data, while still providing autoscaling. To understand this benefit, it is important to understand that this wasn’t always the case. It used to be that just sticking data into DynamoDB with autoscaling could lead to degraded performance or errored queries. The reason is that, when autoscaling, DynamoDB would shard the data on the assumption that every key would be accessed with roughly uniform frequency. It did not consider which data is most important, and not all data is created equal: some keys are accessed far more often than others (the term for this is a ‘hot key’). However, DynamoDB now has a feature called adaptive capacity which, as of 2019, adapts instantly to your access patterns. In other words, the sharding that occurs during autoscaling distributes your data according to the demands on different keys. This is remarkable. What’s more, it comes at no additional cost and does not have to be configured.
  7. Perhaps one of the drawbacks of using DynamoDB is that its costs can balloon if you experience spikes in demand or if you haven’t done a good job of predicting the demand on your database. While Amazon does provide a pricing calculator to help you estimate your costs, it is still dependent on your assumptions and estimates. This drawback is really a tradeoff for the autoscaling capacity that DynamoDB provides. One of the things you should therefore ask yourself is: are you expecting fluctuations in demand that would benefit from DynamoDB’s autoscaling?
  8. Another drawback of using DynamoDB is that it has a limit of 400 KB on the size of an individual item. This means that DynamoDB is not well suited to storing images or large documents.
  9. In DynamoDB, scanning a table is not an efficient operation, which can of course be a problem depending on the structure of your data and your use case. Because scanning a whole table in DynamoDB is expensive, this may lead you to incur additional costs over what was initially planned. It may also lead you to want additional indexes, particularly global secondary indexes, which can be provisioned at an additional cost, particularly because of the additional cost of writes in DynamoDB.
  10. A final drawback to consider is that write operations in DynamoDB are expensive. Writes to DynamoDB are priced via ‘Write Capacity Units’ per month, and when you have a write-heavy use case this cost can balloon.
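A minimal sketch of how points 8 and 10 might play out in code, assuming DynamoDB’s 400 KB per-item limit and provisioned-capacity pricing where one write capacity unit (WCU) covers one write per second of an item up to 1 KB (larger items consume proportionally more WCUs). The helper names and the example price per WCU are illustrative, not a quoted AWS price, and the size check only approximates DynamoDB’s real accounting:

```python
import json
import math

MAX_ITEM_BYTES = 400 * 1024  # DynamoDB's per-item size limit (400 KB)

def check_item_size(item: dict) -> int:
    """Rough pre-write size check; raises if the item is too large.

    Approximates item size as the length of its JSON encoding -- real
    DynamoDB accounting sums attribute-name and value lengths instead.
    """
    size = len(json.dumps(item).encode("utf-8"))
    if size > MAX_ITEM_BYTES:
        raise ValueError(f"item is {size} bytes, over the 400 KB limit")
    return size

def wcus_for_write(item_bytes: int) -> int:
    """One WCU covers a write of up to 1 KB; bigger items cost more."""
    return max(1, math.ceil(item_bytes / 1024))

def monthly_write_cost(writes_per_second: float, item_bytes: int,
                       price_per_wcu_month: float = 0.47) -> float:
    """Estimate the monthly cost of provisioned write capacity.

    price_per_wcu_month is an illustrative figure, not an AWS quote.
    """
    wcus = writes_per_second * wcus_for_write(item_bytes)
    return wcus * price_per_wcu_month

small = check_item_size({"pk": "user#1", "name": "Ada"})
cost = monthly_write_cost(writes_per_second=100, item_bytes=3000)
```

Note how a 3 KB item triples the write cost of a sub-1 KB item: this is exactly why write-heavy workloads with large items see their DynamoDB bill balloon.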
Categories: AWS, Brief 10-Point Guides, Databases, DevOps

A Brief 10-Point Guide to AWS Aurora

If only AWS Aurora provided great streamed hams
AWS Aurora at this time of year, localised entirely in your application stack?

When I’m learning about new technologies, one of the first things I’ll do is write a numbered list from 1 to 10 and then fill it in with points about that particular technology. I realised these lists might be useful to others, so I’m going to start posting them. This week I looked at AWS Aurora.

  1. Amazon Aurora is a cloud-based database solution (database-as-a-service, DBaaS) that is compatible with MySQL and PostgreSQL.
  2. Using Amazon Aurora has the benefit that users don’t have to deal with the headaches involved in managing and maintaining physical hardware for their databases.
  3. Another benefit of using Aurora over MySQL and PostgreSQL is the performance. Aurora is around 5 times as fast as MySQL, and around 3 times as fast as PostgreSQL when compared on the same hardware.
  4. With Aurora, concerns about database capacity are eradicated, because Aurora’s storage automatically scales all the way from the 10 GB minimum to the 64 TB maximum. This means the maximum table size using Aurora (64 TB) is four times the maximum table size of InnoDB (16 TB).
  5. A further benefit of Aurora over MySQL is replication: in Aurora, you can create up to 15 read replicas.
  6. One of the most prominent drawbacks of AWS Aurora is that it is only compatible with MySQL 5.6 and 5.7.
  7. A somewhat minor drawback is that the port number for connections to the database cannot be configured; it is locked at 3306.
  8. Another benefit is that, as you would expect, Aurora integrates well with other AWS products; for example, you can invoke an AWS Lambda function from within an AWS Aurora database cluster, and you can also load data from S3 buckets.
  9. With AWS Aurora, the minimum RDS instance size you can use is r3.large, which will impact the cost of using AWS Aurora.
  10. AWS Aurora is also not available on the AWS free tier.
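As a back-of-the-envelope illustration of point 4, Aurora grows its storage automatically between the 10 GB floor and the 64 TB ceiling, so you never pre-provision disk. A small sketch of that behaviour (the assumption that storage is allocated in 10 GB increments, and the function itself, are illustrative rather than a description of Aurora’s internals):

```python
import math

MIN_GB = 10          # Aurora's storage floor
MAX_GB = 64 * 1024   # Aurora's 64 TB storage ceiling, in GB

def aurora_storage_gb(data_gb: float) -> int:
    """Storage Aurora would have auto-grown to for a given data size,
    assuming allocation in 10 GB increments between the bounds."""
    if data_gb > MAX_GB:
        raise ValueError("beyond Aurora's 64 TB storage maximum")
    grown = math.ceil(data_gb / MIN_GB) * MIN_GB
    return min(MAX_GB, max(MIN_GB, grown))
```

The point of the sketch is the shape of the curve: you pay for storage that tracks your data, rather than guessing a capacity up front.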

Categories: AWS, Brief 10-Point Guides, DevOps

A Brief 10-Point Guide to AWS Lambda

When I’m learning about new technologies, one of the first things I’ll do is write a numbered list from 1 to 10 and then fill it in with points about that particular technology. I realised these lists might be useful to others, so I’m going to start posting them. This week I looked at AWS Lambda.

  1. AWS Lambda lets you run an application without provisioning or managing servers. It is an event-driven computing platform: your code is executed by Lambda when a configured event triggers it.
  2. AWS Lambda uses a Function as a Service (FaaS) model of cloud computing.
  3. With AWS Lambda you don’t need an application that is running 24/7; as a consequence, you only pay for the time your functions spend executing, which can lead to a significant cost reduction compared with traditional server-based architectures.
  4. Due to the nature of AWS Lambda’s FaaS model of cloud computing, development times for applications can be greatly reduced, because the problems of managing an application stack within a server-based context are eliminated.
  5. Planning and management of resources are efforts that are nearly completely removed with AWS Lambda because of its auto-scaling: when more computing power is needed, it scales up resources seamlessly, and, conversely, when fewer resources are required, it scales down just as seamlessly.
  6. With AWS Lambda, a greater proportion of developer time is available for working on the problems and challenges of the business logic.
  7. One of the drawbacks of using AWS Lambda is that it is not necessarily faster than traditional architectures. When a new instance of a function is invoked, Lambda needs to start up a process with the code to be executed (a ‘cold start’). This start-up time is not present in traditional server-based architectures, where the process or processes are running all the time.
  8. A further drawback of using AWS Lambda is the issue of concurrent executions of functions. The default account limit for concurrent executions is 1,000, although depending on the region this can go up to 3,000; that increase has to be specifically requested, however. Few applications will have to worry about this problem. One additional thing of note about concurrent executions is that AWS Lambda does allow you to limit the concurrent executions of specific functions.
  9. The current maximum execution time for an AWS Lambda function is 15 minutes, which can be a problem if the function is running a task that will take longer, although that would be a sign the function should be decomposed where possible.
  10. A further drawback of AWS Lambda is that individual functions are somewhat constrained in their access to computational resources. For example, the current RAM limit for an AWS Lambda function is 3 GB.
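To make point 1 concrete, a Lambda function in Python is just a handler that Lambda calls with the triggering event and a context object; there is no server loop for you to write. A minimal sketch (the event shape and function name are illustrative; `event`/`context` is the real Python handler signature):

```python
import json

def lambda_handler(event, context):
    """Entry point Lambda invokes when the configured event fires.

    `event` carries the trigger's payload (its shape depends on the
    event source); `context` carries runtime metadata such as the
    remaining execution time before the 15-minute limit is hit.
    """
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Locally we can exercise the handler directly with a fake event,
# which is handy for unit tests outside of AWS.
response = lambda_handler({"name": "reader"}, None)
```

Deployed behind an event source such as API Gateway, the same handler would be invoked per request, scaled up and down by Lambda as described in point 5.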