2020-10-20
Emerging Architectures for Modern Data Infrastructure

We’ll provide a high-level overview of three common blueprints here. We start with the blueprint for modern business intelligence, which focuses on cloud-native data warehouses and analytics use cases. In the second blueprint, we look at multimodal data processing, covering both analytic and operational use cases built around the data lake. In the final blueprint, we zoom into operational systems and the emerging components of the AI and ML stack.

Interesting post on how these architectures are going to evolve.

 2020-07-11
Martin Heinz - Personal Website & Blog

Most of the time, what programming really is - is just a lot of trial and error. Debugging on the other hand is - in my opinion - an Art and becoming good at it takes time and experience - the more you know the libraries or framework you use, the easier it gets.

Different ways to debug your Python code with logging.

 2020-07-04
Luc Perkins | Blog | Service mesh use cases

Service mesh is a blazing hot topic in software engineering right now and rightfully so. I think it’s extremely promising technology and I’d love to see it widely adopted (in cases where it truly makes sense). Yet it remains shrouded in mystery for many people, and even those who are familiar with it have trouble articulating what it’s good for and what it even is (like yours truly).

Good description of all the use cases for the service mesh.

 2020-07-04
Scaling the hottest app in tech on AWS and Kubernetes

You shouldn’t pick up a given technology and integrate it with your stack just because it’s the trendy thing. We didn’t jump into the cloud 10 years ago just because it was cool; it felt like we were comparatively late to adopt Kubernetes; we don’t use a service mesh. We’ve let these technologies mature, and now we’re getting real value.

Nice read, gives insight into the architecture decisions behind hey.com.

 2020-07-02
Automating safe, hands-off deployments

Building automated deployment safety into the release process by using extensive pre-production testing, automatic rollbacks, and staggered production deployments lets us minimize the potential impact on production caused by deployments. This means that developers don’t need to actively watch deployments to production.

Extensive blog post on deployment and release methodologies followed at AWS, worth reading for anyone working on CI/CD implementations.

 2020-06-18
No Code

That’s why at the end of the article I can only repeat the trivial: always try to understand what’s going on, always look for practical benefits rather than catchy words, and always try to stay in the middle.

Long and interesting post on no-code vs. coding for building workflows. It also lists many tools that can be used instead of coding one yourself.

 2020-06-14
Building a Kubernetes-based Platform: Container Tooling, Progressive Delivery, the Edge, and Observability

As such, it provides a solid foundation on which to support the other three capabilities of a cloud-native platform: progressive delivery, edge management, and observability. These capabilities can be provided, respectively, with the following technologies: continuous delivery pipelines, an edge stack, and an observability stack.

Nice article to read before planning Kubernetes-based architectures, with details on tools, processes, etc.

 2020-06-04
Terraform multiregion deployment

We might come across a scenario where we need to deploy resources in multiple AWS regions as part of a disaster recovery or backup plan.

Using Terraform for infrastructure as code, we can achieve this with the alias argument in the provider block.

Primary region

Declare the provider with the primary region as usual.

provider "aws" {
  version = "~> 2.0"
  region  = "us-east-1"
  profile = "default"
}

Backup region

Declare the second provider with an alias, which can then be referenced while creating resources.

provider "aws" {
  alias  = "backup"
  region = "ap-southeast-1"
}

Full Terraform sample

Now we can use the alias to create resources in the primary and backup regions; in our case these are us-east-1 and ap-southeast-1.

provider "aws" {
  version = "~> 2.0"
  region  = "us-east-1"
  profile = "default"
}

provider "aws" {
  alias  = "backup"
  region = "ap-southeast-1"
}

# No provider argument, so this instance uses the default
# provider and is created in us-east-1.
resource "aws_instance" "Primary-EC2" {
  ami           = var.ami
  instance_type = "t2.micro"

  tags = {
    Name = "Primary-EC2"
  }
}

# Explicitly references the aliased provider, so this instance
# is created in ap-southeast-1.
resource "aws_instance" "Secondary-EC2" {
  provider      = aws.backup
  ami           = var.ami_backup
  instance_type = "t2.micro"

  tags = {
    Name = "Secondary-EC2"
  }
}

# AMI IDs are region-specific, so each region needs its own.
variable "ami" {
  default = "ami-01d025118d8e760db"
}

variable "ami_backup" {
  default = "ami-0fe1ff5007e7820fd"
}

Apply the Terraform configuration to have EC2 instances created in both regions.
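The same alias mechanism also scales beyond single resources: Terraform lets you pass an aliased provider into a reusable module through the providers argument. A minimal sketch, assuming the aliased "backup" provider above (the module name and path here are hypothetical, not from this post):

```hcl
# Hypothetical module whose resources should all land in the backup region.
module "dr_stack" {
  source = "./modules/dr-stack"

  # Inside the module, the plain "aws" provider now refers to
  # the aliased ap-southeast-1 configuration.
  providers = {
    aws = aws.backup
  }
}
```

This keeps the region choice at the call site, so the same module can be instantiated once per region without any region-specific code inside it.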

source

 2020-06-02
Welcome to Excursions!

Created a place to write about the things I liked, used, and found across the world of the web. Every day spent there is an excursion in life, enjoying and learning new things along the way.
