Technical Lead at Aledade Inc.
April 2022 - Present
B.S. in Computer Information Technology @ CSU Northridge, with Honors
Upper Division GPA: 3.94.
Minor in Religious Studies.
Refactored a large portion of our code that handles Postgres users, roles, and servers, reducing our database codebase by 100,000 lines. Built automation that syncs users' Active Directory roles with their database roles and integrates with Snowflake to apply role changes there as well. Introduced and implemented Terragrunt for database management.
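At its core, this kind of sync reduces to a diff between a user's Active Directory roles and their current database roles. A minimal sketch of that logic (function names and the Snowflake statement shapes are illustrative assumptions, not the production code):

```python
def role_changes(ad_roles: set, db_roles: set) -> tuple:
    """Return (roles to grant, roles to revoke) so the database matches AD."""
    return ad_roles - db_roles, db_roles - ad_roles


def sync_statements(user: str, ad_roles: set, db_roles: set) -> list:
    """Build Snowflake-style GRANT/REVOKE statements for one user (illustrative)."""
    grants, revokes = role_changes(ad_roles, db_roles)
    stmts = [f'GRANT ROLE "{r}" TO USER "{user}"' for r in sorted(grants)]
    stmts += [f'REVOKE ROLE "{r}" FROM USER "{user}"' for r in sorted(revokes)]
    return stmts
```

Keeping the diff as a pure function makes the sync idempotent: running it twice produces no second round of statements.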
Created a Python script that integrates with multiple vendors' APIs to automatically remove accounts when a user leaves the company, eliminating a manual process that took around an hour per user.
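A minimal sketch of such an offboarding script, assuming a hypothetical vendor REST endpoint of the form /users/&lt;id&gt; with bearer-token auth (the real vendor APIs are not shown in the original):

```python
def orphaned_accounts(active_employees: set, vendor_accounts: set) -> set:
    """Vendor accounts that no longer belong to a current employee."""
    return vendor_accounts - active_employees


def remove_user(base_url: str, token: str, user_id: str) -> bool:
    """Delete one account via a vendor's REST API (endpoint shape is assumed)."""
    import requests  # imported here so the pure helper above has no dependency

    resp = requests.delete(
        f"{base_url}/users/{user_id}",
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    return resp.status_code in (200, 204)
```

The set difference identifies who to remove; the same loop can then be run once per vendor with that vendor's base URL and credentials.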
Static Code Analysis
Leading a project to host SonarQube on AWS EKS.
Created parallel deployments in production and nonproduction EKS clusters. Created RDS Postgres databases connected to the cluster. Leveraged Terraform for infrastructure creation across all projects.
Configured Bamboo and Bitbucket plugins to run static code analysis automatically when a pull request is created, and to report the outcome both on the pull request in Bitbucket and back in SonarQube.
Improving our Deployments
Implemented a blue-green deployment for our highest-traffic production application, reducing deployment downtime from roughly 30 seconds to zero.
Used an Ansible playbook to bring up a parallel deployment with automatic rollback, functionally eliminating downtime during releases.
Created a Python function for AWS Lambda to automatically tag snapshots to match their EBS volume counterparts.
At the time the script first ran, untagged snapshots were responsible for 90% of untraced AWS spend; afterward, all of that spend was properly tagged under each project.
6,500 snapshots were tagged in the initial run, with 250+ more tagged by the script each day.
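A sketch of how such a Lambda might be structured: a pure helper computes which volume tags are missing from a snapshot (AWS-reserved "aws:" keys are skipped, since create_tags rejects them), and the handler copies them over. This is an assumed shape, not the original function:

```python
def missing_tags(volume_tags: list, snapshot_tags: list) -> list:
    """Tags present on the EBS volume but absent from its snapshot."""
    have = {t["Key"] for t in snapshot_tags}
    return [
        t for t in volume_tags
        if t["Key"] not in have and not t["Key"].startswith("aws:")
    ]


def handler(event, context):
    import boto3  # available in the Lambda runtime

    ec2 = ec2_client = boto3.client("ec2")
    for page in ec2.get_paginator("describe_snapshots").paginate(OwnerIds=["self"]):
        for snap in page["Snapshots"]:
            try:
                vol = ec2.describe_volumes(
                    VolumeIds=[snap["VolumeId"]]
                )["Volumes"][0]
            except ec2_client.exceptions.ClientError:
                continue  # source volume has since been deleted
            new = missing_tags(vol.get("Tags", []), snap.get("Tags", []))
            if new:
                ec2.create_tags(Resources=[snap["SnapshotId"]], Tags=new)
```

Because only missing tags are copied, re-running the function is safe, which matches the daily incremental behavior described above.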
Updating our Infrastructure
Upgraded Helm 2 to Helm 3 across production and nonproduction EKS clusters, including removing Tiller from all namespaces.
Hosted knowledge-sharing sessions with other team members, including initial onboarding for a new employee.
PSI Services LLC.
Migrated local servers to AWS
One of my primary responsibilities was migrating on-premises applications to AWS EKS and EC2.
Two major migrations I completed were PSI's business rules engine and SSO solution. Both were moved from an on-premises DC/OS cluster to AWS EKS, including the creation of RDS databases with secret rotation and Kubernetes secrets.
Migrated production and nonproduction Jira instances from on-premises virtualized servers to AWS EC2 using Packer and Terraform, and moved their databases from on-prem to RDS Postgres.
Containerized and migrated various in-house Java TomEE servers from virtual servers to DC/OS.
All infrastructure was created as code using Terraform, and deployed to 2-5 environments depending on the application.
Implemented Continuous Integration and Delivery
As part of these migrations, I also created CI/CD pipelines. Jenkins was used with a company-wide shared library and additional Groovy scripts for building applications. Dockerfiles were written as part of this process, and apps were deployed via BuildMaster using Helm charts or plain Kubernetes manifests. BuildMaster also supported Bash and PowerShell scripts, which I leveraged in the pipelines.
Created various CI/CD pipelines for new and existing applications that previously required manual deployment.
Substantial improvements to Monitoring and Alerting
Implemented a monitoring solution for 200+ websites and servers: websites are monitored via HTTP requests, while each server runs an agent. Agents were installed on both Windows and Linux machines via Rundeck. Some of these servers required custom Python and Nagios plugins, which I also created.
Migrated the monitoring solution away from Icinga, with no downtime required and no lapse of monitoring between solutions.
Leveraged OpsGenie for alerting and on-call rotation, with an integration that creates a Jira ticket whenever a high-priority alert fires.
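Custom Nagios plugins follow a simple contract: print one status line and exit with 0/1/2/3 for OK/WARNING/CRITICAL/UNKNOWN. A minimal Python check against warning/critical thresholds, with the measured metric left abstract (this is a generic sketch, not one of the plugins from the original):

```python
import sys

OK, WARNING, CRITICAL, UNKNOWN = 0, 1, 2, 3  # Nagios plugin exit codes


def evaluate(value: float, warn: float, crit: float) -> tuple:
    """Map a measured value against warning/critical thresholds."""
    if value >= crit:
        return CRITICAL, f"CRITICAL - value {value} >= {crit}"
    if value >= warn:
        return WARNING, f"WARNING - value {value} >= {warn}"
    return OK, f"OK - value {value}"


if __name__ == "__main__":
    # usage: check_metric.py <value> <warn> <crit>
    code, message = evaluate(*(float(a) for a in sys.argv[1:4]))
    print(message)
    sys.exit(code)
```

Because the agent only inspects the exit code and first output line, the same pattern works unchanged on both Windows and Linux hosts.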
Terraform AWS EKS deployment
Created a Terraform EKS deployment with node autoscaling, RBAC config generation, multiple namespaces, Kubernetes load balancer definition support (ingress + service) and automatic Route 53 Private Hosted Zone configuration.
MetroLink Advisory Emailer
Created a Python script that uses the Requests library to track train delays posted on the Metrolink website. If a delay is detected for any of a configurable list of train numbers, an alert is sent to the user's email.
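The core of the script reduces to filtering the scraped advisories against a watch list and emailing any matches. A sketch, where the advisory structure, fetch/parse step (done with Requests in the original), and local mail relay are all assumptions:

```python
import smtplib
from email.message import EmailMessage


def delayed_watched_trains(advisories: list, watched: set) -> list:
    """Advisories whose train number is on the watch list."""
    return [a for a in advisories if a["train"] in watched]


def send_alert(advisory: dict, to_addr: str) -> None:
    """Email one delay advisory (assumes a mail relay on localhost)."""
    msg = EmailMessage()
    msg["Subject"] = f"Metrolink delay: train {advisory['train']}"
    msg["To"] = to_addr
    msg.set_content(advisory["text"])
    with smtplib.SMTP("localhost") as smtp:
        smtp.send_message(msg)
```

Separating the filter from the I/O keeps the delay-detection logic easy to test without hitting the Metrolink site or a mail server.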
Website deployment on AWS EKS
I created a Kubernetes deployment for my website using AWS, Terraform, Python, Docker and Kubectl. I created various Terraform modules for AWS services (EKS, ASG, ECR, VPC, IAM).
I used Python with Boto3 and subprocesses for configuration: first pulling the kubeconfig from EKS, then applying a ConfigMap using kubectl. Next, I built a Docker image from a Dockerfile (which runs my website on an Apache2 server) and pushed it to ECR. Finally, I created a service with kubectl and exposed it through an external load balancer (ELB) with a public IP address.
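The kubeconfig step can be sketched as follows: build a minimal config from the EKS cluster description, then shell out to kubectl. This is an assumed reconstruction, not the original code; it writes JSON because JSON is valid YAML, which avoids a PyYAML dependency:

```python
import json
import subprocess
import tempfile


def build_kubeconfig(name: str, endpoint: str, ca_data: str) -> dict:
    """Assemble a minimal kubeconfig for an EKS cluster using exec-based auth."""
    return {
        "apiVersion": "v1",
        "kind": "Config",
        "clusters": [{"name": name, "cluster": {
            "server": endpoint,
            "certificate-authority-data": ca_data}}],
        "users": [{"name": name, "user": {"exec": {
            "apiVersion": "client.authentication.k8s.io/v1beta1",
            "command": "aws",
            "args": ["eks", "get-token", "--cluster-name", name]}}}],
        "contexts": [{"name": name,
                      "context": {"cluster": name, "user": name}}],
        "current-context": name,
    }


def apply_manifest(cluster_name: str, manifest_path: str) -> None:
    import boto3  # imported here so build_kubeconfig stays dependency-free

    info = boto3.client("eks").describe_cluster(name=cluster_name)["cluster"]
    cfg = build_kubeconfig(cluster_name, info["endpoint"],
                           info["certificateAuthority"]["data"])
    with tempfile.NamedTemporaryFile("w", suffix=".json") as f:
        json.dump(cfg, f)  # JSON is a subset of YAML, so kubectl accepts it
        f.flush()
        subprocess.run(["kubectl", "--kubeconfig", f.name,
                        "apply", "-f", manifest_path], check=True)
```

The exec stanza delegates token generation to "aws eks get-token", so the config never stores credentials itself.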
DevOps Capstone Project
I developed AWS infrastructure in an agile environment and worked alongside a team of peers. I built and documented reusable Terraform modules for AWS infrastructure, and automated Docker image building and deployment from GitLab to AWS ECS using GitLab CI.
I implemented Datadog monitoring with an AWS Lambda function, a Datadog ECS agent and Datadog AWS integration with an IAM role. Alongside this, I trained a less experienced team member in advanced Terraform topics such as Terraform modules and container orchestration using ECS.
I worked with a group of peers to develop AWS infrastructure as part of a senior-design class. During the class, I automated static website delivery from GitHub to an S3 bucket using CircleCI.
After this, I deployed the website to AWS CloudFront and obtained TLS certificates using Let's Encrypt and AWS Certificate Manager. We also created EC2 instances with Terraform, which I configured using Ansible.
About this Site
I developed this website using HTML, CSS, and Font Awesome. The website is hosted on GitLab Pages, with Cloudflare as a DNS provider and caching solution, and uses a TLS origin certificate from Cloudflare. The domain was purchased from NameCheap.
I also continue to update the website regularly and add new features, including a dark-mode version that displays automatically based on your device preferences, using the CSS media query @media (prefers-color-scheme: dark).
I also deployed this website on DigitalOcean for testing. First, I created a droplet with a floating IP, then a firewall allowing HTTP, HTTPS, and SSH traffic. I also created a multistage CI/CD pipeline with GitLab CI, using the GitLab registry to store an intermediate container image. Finally, the code is deployed to DigitalOcean through the pipeline using sshpass.
DevOps + Linux Sysadmin Assignments
As part of my capstone project, we were given weekly assignments; examples include creating a LAMP stack image using a Dockerfile and setting up Prometheus and Node Exporter using a Dockerfile. I completed a write-up for each assignment with further explanation of the task and my solution.