Categories
React Typescript

Create a Todo App with TypeScript and React

https://youtu.be/uTzZ7Ca8pSo

Get the backend to follow along with this video here:


Categories
Brief 10-Point Guides

A Brief 10-Point Guide to GraphQL

When I’m learning about new technologies, one of the first things I’ll do is write a numbered list from 1 to 10 and then fill it in with points about that particular technology. I realised that these lists might be useful to others, so I’m going to start posting them. This week I looked at GraphQL.

  1. GraphQL can best be thought of as middleware between your clients and your servers. The idea behind GraphQL is to have a system which sits between your clients and your API and allows data to be requested through it. It has an in-built query language which allows you to create requests, and GraphQL will handle the fetching and composing of the data.
  2. One of the primary reasons to use GraphQL is that it simplifies dealing with your backend APIs. A common pattern in applications is the need to request data from two or more distinct sources and compose it to be processed together. For example, you might have images stored in one database and text stored in another. GraphQL enables you to write a single query to get the data from both sources and return a single response, which greatly simplifies your code as you don’t have to deal with the complexities of handling multiple requests and waiting for all of them before processing the data together.
  3. One of the oft-quoted benefits of GraphQL is that it decouples you from your existing API implementation. This means that with GraphQL you can easily swap out the implementation of your existing API for another one and everything will continue along hunky-dory. However, the seldom-mentioned caveat is that you are necessarily coupling yourself to GraphQL itself.
  4. Data savings are another benefit of using GraphQL. With GraphQL, you don’t need to make multiple calls to your API as you would with REST. You can query for only the data you want: GraphQL can aggregate data from multiple sources for you and it will only return the data that you need. The result of this is data savings for each API call, which leads to improved performance of your application overall.
  5. GraphQL enables increased speed of development. Because GraphQL decouples your client from your API, different teams also become decoupled, meaning there is a reduced need for coordination between teams to get development done. For example, instead of the frontend team needing to coordinate with the backend team to update or add a new REST endpoint, they can simply develop their new frontend functionality using GraphQL to get the exact data they need.
  6. Another benefit of using GraphQL is that there is reduced need for filtering data. Often when developing a new feature of an application, a developer will have to filter and aggregate existing data to generate the new data required for the new feature. All of this leads to additional code, which incurs additional costs both in terms of development time and maintenance time. With GraphQL this is unnecessary as GraphQL handles the aggregation and filtering of data for free.
  7. One of the drawbacks of GraphQL is that it requires a fair amount of boilerplate code. For example, you will often need to write a resolver, a query, a mutation, and a schema (see the sketch after this list).
  8. Another benefit is strong typing: GraphQL’s query language is strongly typed. This is a benefit because it provides a common contract for communication between the client of an application and the backend. It helps facilitate independent development of the frontend and backend because the strong typing provides a solid and predictable foundation on which to base future system states. In other words, developers can write new code knowing that GraphQL will be able to provide the data they require, and they will know the format of that data.
  9. Another disadvantage of GraphQL is that error handling is somewhat more difficult and cumbersome than it is with REST. If your GraphQL query errors, it doesn’t return a 5xx error code or even a 4xx error code; it always returns a 200 success code, with the error in the JSON response itself. This is weird. It’s something to be aware of because if you are considering migrating your system to GraphQL, this quirk will likely increase the time required to do so: you will have to change your error handling and any code that specifically checks for standard error codes. This will likely mean changing much of your test code as well.
  10. A final consideration with GraphQL is its complexity. This isn’t really a disadvantage or an advantage: GraphQL isn’t particularly complex, but it is more complex than a standard REST API. If you know that your application is going to have a relatively simple and stable API over time, then you might be better off sticking with REST. If you are unsure, it would be better to use GraphQL from the start, as migrating from REST to GraphQL once an application is midway through development wouldn’t be a quick process: you’d have to define all your schemas, mutations, queries, and resolvers. Plus you’d potentially have to write new tests and learn a whole new testing API.
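
To make point 7 concrete, here is a minimal sketch of that boilerplate in Python using the graphene library. The schema, field name, and resolver here are illustrative assumptions, not taken from a real API:

import graphene

# A minimal schema: one query type with a single strongly-typed field
class Query(graphene.ObjectType):
    hello = graphene.String(name=graphene.String(default_value='world'))

    # The resolver fetches and composes the data for the field
    def resolve_hello(root, info, name):
        return 'Hello, ' + name + '!'

schema = graphene.Schema(query=Query)

# Note for point 9: errors come back inside the result, not as an HTTP status code
result = schema.execute('{ hello }')
print(result.data)    # {'hello': 'Hello, world!'}
print(result.errors)  # None on success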
Categories
Redis

Redis Quick Fix | (error) WRONGTYPE Operation against a key holding the wrong kind of value

I recently ran into this Redis error when running the XREAD command on a Redis Stream.

This is the error I got:

(error) WRONGTYPE Operation against a key holding the wrong kind of value

This error from Redis baffled me.

After far too much time trying to figure out what went wrong, I finally realised that I had accidentally created a Redis hash with the same key as the stream I was trying to access.

What the error is saying is that we are trying to use an operation on a Redis type that doesn’t support that operation. In other words, we are trying to run a stream command on a hash, or a hash command on a stream. The most likely reason for this is having accidentally mixed up your stream and hash keys.
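
As a quick sketch of how the mix-up reproduces, using the redis-py client (the key and field names are made up for the example, and a local Redis instance is assumed):

import redis

r = redis.Redis()  # assumes Redis is running locally on the default port

# Accidentally create a hash under the key we meant to use as a stream
r.hset('mystream', 'field', 'value')

# A stream command against the same key now raises the WRONGTYPE error
try:
    r.xread({'mystream': '0'})
except redis.exceptions.ResponseError as e:
    print(e)  # WRONGTYPE Operation against a key holding the wrong kind of value

# TYPE tells you what is actually stored at a key
print(r.type('mystream'))  # b'hash'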

I explain with examples in the below video:

https://youtu.be/_ZKVf3tLGfM
Categories
AWS Brief 10-Point Guides DevOps

A Brief 10-Point Guide to AWS Elastic Beanstalk

When I’m learning about new technologies, one of the first things I’ll do is write a numbered list from 1 to 10 and then fill it in with points about that particular technology. I realised that these lists might be useful to others, so I’m going to start posting them. This week I looked at AWS Elastic Beanstalk.

  1. Amazon’s Elastic Beanstalk is a service which allows you to deploy applications within an AWS context that involves a number of different AWS services such as S3 and ECR. With Elastic Beanstalk you don’t have to worry about the management and orchestration of these services. You can simply focus on your application.
  2. One of the main benefits of using AWS Elastic Beanstalk is that it abstracts away the need to deal with infrastructure. You don’t have to worry about configuring load balancing or scaling your application. You can simply upload your code and go.
  3. Another benefit of using AWS Elastic Beanstalk is that assuming you were planning to host your application on AWS to begin with, there’s no additional cost to using Elastic Beanstalk.
  4. While you have the ability to sit back and let Elastic Beanstalk handle everything for you, AWS does give you the option to configure and control the AWS resources you use for your application. And what’s more, it’s not an all-or-nothing deal. You could, for example, decide to manually manage your EC2 instances but leave your S3 buckets to be managed by Elastic Beanstalk.
  5. Setting up monitoring and logging tools for your application is often a full-time job in and of itself. With Elastic Beanstalk, you don’t have to bother, because it comes with a monitoring interface which is integrated with Amazon CloudWatch and AWS X-Ray (see the sketch after this list).
  6. One of the drawbacks of a system that abstracts away the need for management is that understanding what has gone wrong with Elastic Beanstalk can be a difficult task, because the errors you need to diagnose the problem can be hard to see.
  7. An additional drawback of using Elastic Beanstalk is limited third-party integration. Some of the common culprits like Docker, Jenkins and GitHub are supported, but don’t expect the third-party integration to be extensive.
  8. One of the pros of AWS Elastic Beanstalk is that you can easily integrate AWS CodePipeline with it, which can enable you to check that the code you’ve just uploaded is working correctly.
  9. Another benefit of the auto-management of AWS Elastic Beanstalk is that unavoidable things like updating versions and operating systems can be done without any downtime to the application. Furthermore, should something go wrong with one of those updates, it is quite easy to roll back the application to an earlier state, again without any downtime (unless, of course, the update itself caused downtime).
  10. A final disadvantage of using AWS Elastic Beanstalk is that if you require technical support from Amazon, there is a charge for it. While this is normally to be expected from modern SaaS and PaaS products, in this case it is something to consider carefully because of the challenge, mentioned above, of not being easily able to diagnose problems with the system.
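
As a small sketch of the monitoring side of point 5, here is how you might list your environments’ status and health with boto3. The region is an assumption, and this only shows the API view rather than the console:

import boto3

# Use whichever region your environments actually live in
eb = boto3.client('elasticbeanstalk', region_name='eu-west-1')

# Print each environment's status and health, similar to the console's overview
for env in eb.describe_environments()['Environments']:
    print(env['EnvironmentName'], env['Status'], env['Health'])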
Categories
Java Spring Boot Web Scraping for Beginners

Web Scraping Dynamic Websites with Java and Selenium

https://youtu.be/PF0iyeDmu9E

Get the starter code for this tutorial:


Categories
Python

Coding Richard Dawkins’ ‘Methinks It Is Like a Weasel’ in Python

https://www.youtube.com/watch?v=WekRrOXZWvk
#!/usr/bin/env python3
import random
import string

TARGET = ''      # set at runtime from user input
TARGET_LEN = 0   # set at runtime to len(TARGET)
LETTERS = string.ascii_uppercase + ' ' + string.punctuation
GENERATIONS = 100             # maximum number of generations to run
GENERATION_POPULATION = 100   # strings produced per generation

allGenerations = []

bestString = {
    'string': '',
    'base_for_new_generation': '',
    'score': 0
}

def randomString(stringLength):
    # Build a random string of the given length from the allowed characters
    return ''.join(random.choice(LETTERS) for i in range(stringLength))

def replaceNonMatchesWithRandomChar():
    # Replace each '*' placeholder in the base string with a new random character
    tempString = bestString['base_for_new_generation']
    for i, letter in enumerate(tempString):
        if letter == '*':
            tempString = switchChar(tempString, i, random.choice(LETTERS))
    return tempString

def switchChar(s, index, rand_char):
    # Return a copy of s with the character at index replaced by rand_char
    # (parameter named s to avoid shadowing the imported string module)
    return s[:index] + rand_char + s[index+1:]

def createNewGeneration():
    constructNewBaseForGeneration(bestString['string'])
    allGenerations.append([replaceNonMatchesWithRandomChar() for _ in range(GENERATION_POPULATION)])

def constructNewBaseForGeneration(bestMatch):
    base_for_new_generation = ''
    for i in range(TARGET_LEN):
        if TARGET[i] == bestMatch[i]:
            base_for_new_generation = base_for_new_generation + bestString['string'][i]
        else:
            # No match so replace with '*' to make replacing with random char easier
            # This enables us to create x number of strings based on the same parent
            base_for_new_generation = base_for_new_generation + '*'
    bestString['base_for_new_generation'] = base_for_new_generation

def scoreString(s):
    # Score is the number of positions where s matches the target
    score = 0
    for i in range(TARGET_LEN):
        if TARGET[i] == s[i]:
            score = score + 1
    return score

def updateBestString(string, score):
    bestString['string'] = string
    bestString['score'] = score


def checkMatch():
    latest_generation = allGenerations[-1]
    for string in latest_generation:
        score = scoreString(string)
        if score > bestString['score']:
            updateBestString(string, score)

def createInitialStrings():
    allGenerations.append([randomString(TARGET_LEN) for _ in range(GENERATION_POPULATION)])

def printBestString():
    print('Generation ' + str(len(allGenerations)))
    print('Current best string ' + bestString['string'])


def monkeys(target):
    global TARGET 
    TARGET = target
    global TARGET_LEN 
    TARGET_LEN = len(TARGET)
    createInitialStrings()
    checkMatch()
    printBestString()
    for i in range(GENERATIONS):
        if bestString['string'] == TARGET:
            break
        createNewGeneration()
        checkMatch()
        printBestString()
    # Check again after the loop so a match found in the final generation is reported
    if bestString['string'] == TARGET:
        printBestString()
        print('Script Complete Matched in ' + str(len(allGenerations)) + ' generations')
        return len(allGenerations)


if __name__ == '__main__':
    monkeys(input('Enter a string for the monkeys to type: ').upper())
Categories
AWS Brief 10-Point Guides Databases

A Brief 10-Point Guide to AWS DynamoDB

When I’m learning about new technologies, one of the first things I’ll do is write a numbered list from 1 to 10 and then fill it in with points about that particular technology. I realised that these lists might be useful to others, so I’m going to start posting them. This week I looked at AWS DynamoDB.

  1. Amazon’s DynamoDB is a fully managed key-value and document database.
  2. As DynamoDB is fully managed, one of the benefits of using it is that there is no server to manage. More importantly, DynamoDB auto-scales to the demand on the database, and as a result its performance is consistent at scale.
  3. Performance is another reason to use DynamoDB. Amazon boasts ‘single-digit millisecond response times at any scale’.
  4. Data safety is another benefit of using DynamoDB. Amazon enables instant backups of ‘hundreds of terabytes of data’ with no performance impact. Further, with DynamoDB it is possible to restore the database to any state that existed within the last 35 days. More impressive is that this can be done with no downtime.
  5. A seldom-mentioned but very important benefit of using DynamoDB is that Amazon provides the ability to run instances of DynamoDB locally, which is very useful for testing your application. Being able to run a local instance means you can point your integration tests at it, which provides a more realistic test context. You are therefore more likely to catch bugs that would occur in production, improving the quality of your code and speeding up the development cycle (see the sketch after this list).
  6. Another benefit of using DynamoDB is that it accommodates cases where some data is accessed more than other data, while still providing autoscaling. To understand this benefit, it is important to understand that this wasn’t always the case. It used to be that just sticking the data in DynamoDB could lead to a problem where performance would be impacted or queries would error. The reason is that when autoscaling, DynamoDB would shard the data on the assumption that all of it would be accessed with uniform frequency. It did not consider which data is most in demand; not all data is created equal, and a key accessed far more often than the others is known as a ‘hot key’. However, DynamoDB now has a feature called adaptive capacity which, as of 2019, adapts instantly to your data: the sharding process which occurs during autoscaling distributes your data according to the demand on different keys. This is remarkable. What’s more, it comes at no additional cost and does not have to be configured.
  7. Perhaps one of the drawbacks of using DynamoDB is that the costs can balloon if you experience spikes in demand or if you haven’t done a good job of predicting what the demand on your database will be. While Amazon does provide a pricing calculator to help you estimate your costs, it is still dependent on your assumptions and estimates. This drawback is really a tradeoff for the autoscaling capacity that DynamoDB provides. One of the things you should therefore ask yourself is: am I expecting fluctuations in demand that would benefit from DynamoDB’s autoscaling?
  8. Another drawback of using DynamoDB is that it has a limit of 400 KB on individual items. This means that DynamoDB is not well suited to storing images or large documents.
  9. In DynamoDB, scanning a table is not an efficient operation, which can of course be a problem depending on the structure of your data and your use case. Furthermore, this may lead you to incur additional costs over what was initially planned, since scanning a whole table in DynamoDB is expensive. It may also lead you to want additional indexes, particularly global secondary indexes, which can be provisioned at an additional cost, notably because of the additional cost of writing to DynamoDB.
  10. A final drawback to consider is that write operations to DynamoDB are expensive. Writes to DynamoDB are priced via ‘Write Capacity Units’. When you have a use case with a lot of writes, this cost can balloon.
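
To illustrate point 5, here is a minimal sketch of talking to a DynamoDB Local instance with boto3. It assumes DynamoDB Local is already running on port 8000, and the table name and item are made up for the example:

import boto3

# Point boto3 at a DynamoDB Local instance; dummy credentials are fine locally
dynamodb = boto3.resource(
    'dynamodb',
    endpoint_url='http://localhost:8000',
    region_name='us-east-1',
    aws_access_key_id='dummy',
    aws_secret_access_key='dummy',
)

# Create a table and write/read an item, just as you would against the real service
table = dynamodb.create_table(
    TableName='Todos',
    KeySchema=[{'AttributeName': 'id', 'KeyType': 'HASH'}],
    AttributeDefinitions=[{'AttributeName': 'id', 'AttributeType': 'S'}],
    BillingMode='PAY_PER_REQUEST',
)
table.wait_until_exists()

table.put_item(Item={'id': '1', 'text': 'Write integration tests'})
print(table.get_item(Key={'id': '1'})['Item'])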
Categories
Finance Finance and Investing Java Java For Finance Spring Boot

Getting Stock Data with Java & Spring Boot | Java For Finance

https://youtu.be/aDpjiCLr4KM

Get the code to start the project here:


Categories
AWS Brief 10-Point Guides Databases DevOps

A Brief 10-Point Guide to AWS Aurora

If only AWS Aurora provided great streamed hams
AWS Aurora at this time of year, localised entirely in your application stack?

When I’m learning about new technologies, one of the first things I’ll do is write a numbered list from 1 to 10 and then fill it in with points about that particular technology. I realised that these lists might be useful to others, so I’m going to start posting them. This week I looked at AWS Aurora.

  1. Amazon Aurora is a cloud-based database solution (database-as-a-service, DBaaS) that is compatible with MySQL and PostgreSQL.
  2. Using Amazon Aurora has the benefit that users don’t have to deal with the headaches involved in managing and maintaining physical hardware for their databases.
  3. Another benefit of using Aurora over MySQL and PostgreSQL is performance. Aurora is around five times as fast as MySQL and around three times as fast as PostgreSQL when compared on the same hardware.
  4. With Aurora, the concern about database capacity is eradicated because storage automatically scales from the 10 GB minimum all the way to the 64 TB maximum. This means the maximum table size (64 TB) using Aurora is four times the maximum table size (16 TB) of InnoDB.
  5. A further benefit of Aurora over MySQL is replication: in Aurora, you can create up to 15 replicas.
  6. One of the most prominent drawbacks of AWS Aurora is that it is only compatible with MySQL 5.6 and 5.7.
  7. A somewhat minor drawback is that the port number for connections to the database cannot be configured; it is locked to 3306 (see the sketch after this list).
  8. Another benefit is that, as you would expect, Aurora has great integration with other AWS products. For example, you can invoke an AWS Lambda function from within an AWS Aurora database cluster, and you can also load data from S3 buckets.
  9. With AWS Aurora, the minimum RDS instance size you can use is r3.large, which will impact the cost of using AWS Aurora.
  10. AWS Aurora is also not available on the AWS free tier.
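
Since Aurora is compatible with MySQL (points 1 and 7), connecting to it from Python looks just like connecting to vanilla MySQL. Here is a minimal sketch using PyMySQL; the cluster endpoint, credentials, and database name are placeholders:

import pymysql

# The endpoint, credentials, and database name below are placeholders
connection = pymysql.connect(
    host='my-cluster.cluster-abc123.eu-west-1.rds.amazonaws.com',
    port=3306,  # Aurora connections are locked to port 3306
    user='admin',
    password='change-me',
    database='mydb',
)

# Standard MySQL client code works unchanged against Aurora
with connection.cursor() as cursor:
    cursor.execute('SELECT VERSION()')
    print(cursor.fetchone())

connection.close()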

Categories
AWS Brief 10-Point Guides DevOps

A Brief 10-Point Guide to AWS Lambda

When I’m learning about new technologies, one of the first things I’ll do is write a numbered list from 1 to 10 and then fill it in with points about that particular technology. I realised that these lists might be useful to others, so I’m going to start posting them. This week I looked at AWS Lambda.

  1. AWS Lambda lets you run an application without provisioning or managing servers. It is an event-driven computing platform: code is executed in Lambda in response to configured event triggers (see the sketch after this list).
  2. AWS Lambda uses a Function as a Service (FaaS) model of cloud computing.
  3. With AWS Lambda you don’t need an application that is running 24/7. As a consequence, you only pay for the time your functions are executing, which can lead to a significant cost reduction compared with traditional server-based architectures.
  4. Due to the nature of AWS Lambda’s FaaS model of cloud computing, development times for applications can be greatly reduced, because the problems of managing an application stack within a server-based context are eliminated.
  5. Planning and management of resources are efforts which are nearly completely removed with AWS Lambda because of its auto-scaling: when more computing power is needed it will scale up resources seamlessly, and conversely, when fewer resources are required it will scale down seamlessly too.
  6. A greater proportion of developer time is available for working on the problems and challenges of the business logic with AWS Lambda.
  7. One of the drawbacks of using AWS Lambda is that it is not necessarily faster than traditional architectures. This is because when a new instance of a function is invoked, it needs to start up a process with the code to be executed (the ‘cold start’). This start-up time is not present in traditional server-based architectures, where the process or processes are running all the time.
  8. A further drawback of using AWS Lambda is the issue of concurrent executions of functions. The default account limit for concurrent executions of functions is 1000, although depending on region this can go up to 3000, though that has to be specifically requested. Few applications will have to worry about this, however. One additional thing of note is that AWS Lambda does allow you to limit the concurrent executions of specific functions.
  9. The current maximum execution time for an AWS Lambda function is 15 minutes, which can be a problem if the function is running a task which will take longer, although that would be a sign that the function should be decomposed where possible.
  10. A further drawback of AWS Lambda is that individual functions are somewhat constrained in their access to computational resources. For example, the current RAM limit for an AWS Lambda function is 3GB.
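
To give a feel for the FaaS model from point 2, here is a minimal sketch of a Python Lambda handler. The event shape is an assumption; the real payload depends on the trigger you configure (API Gateway, S3, SQS, etc.):

import json

def lambda_handler(event, context):
    # event carries the trigger's payload; context exposes runtime metadata.
    # The 'name' field is made up for this example.
    name = event.get('name', 'world')
    return {
        'statusCode': 200,
        'body': json.dumps({'message': 'Hello, ' + name + '!'}),
    }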