Software Engineer - Cloud Engineer

Habitly AI

Habitly AI is an intelligent task management application that leverages artificial intelligence to help users build better habits and stay productive. It features a robust cloud-native architecture for scalability and security.

Technical Stack

AWS S3 · AWS ECR · AWS Lambda · AWS CloudFront · AWS Cognito · AWS IAM · Gemini API · Docker · TypeScript · React · Vite · Hono · Prisma · PostgreSQL · GitHub Actions

Core Functionalities

Smart features designed for productivity:

  • AI-powered task prioritization and scheduling
  • Automated habit tracking with visual progress reports
  • Secure authentication and user management via AWS Cognito
  • Serverless backend for high scalability and cost-efficiency

System Architecture

The application follows a serverless architecture using AWS Lambda for business logic, S3 and CloudFront for frontend delivery, and Docker containers managed via ECR for specialized services.
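As a sketch of that entry point, a Hono app on Lambda is typically wrapped with Hono's `aws-lambda` adapter (`import { handle } from 'hono/aws-lambda'`). The stand-in router below is a hypothetical stub so the example is self-contained and does not depend on the real codebase:

```typescript
// Hypothetical stand-in for the Hono app running on AWS Lambda.
// In the real stack this would be roughly:
//   import { Hono } from 'hono'
//   import { handle } from 'hono/aws-lambda'
//   export const handler = handle(app)

type RouteResult = { status: number; body: string }

class MiniRouter {
  private routes = new Map<string, () => string>()

  get(path: string, fn: () => string): void {
    this.routes.set(path, fn)
  }

  // Dispatch a request path to its registered handler, or 404.
  dispatch(path: string): RouteResult {
    const fn = this.routes.get(path)
    return fn ? { status: 200, body: fn() } : { status: 404, body: 'Not Found' }
  }
}

const app = new MiniRouter()
app.get('/health', () => JSON.stringify({ ok: true }))

// Lambda invokes this with an API Gateway / Function URL event.
const handler = (event: { rawPath: string }): RouteResult => app.dispatch(event.rawPath)
```

The pattern keeps the routing logic framework-native while the adapter translates Lambda events into standard requests.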

Architecture Tradeoffs

This section outlines the strategic decisions behind Habitly AI's technology stack, particularly the Lambda + Prisma + Hono deployment strategy, which balances runtime performance against development velocity.

  • Velocity vs Efficiency: Prioritized Prisma for schema-driven migrations and type-safety, accepting a 200–600ms cold start latency in exchange for rapid development.
  • Scalability vs Connections: Each Lambda container maintains its own database connection; while efficient for early traffic, this requires RDS Proxy or PgBouncer as concurrent usage scales.
  • Serverless Simplicity: Managed AWS services (Lambda, CloudFront, Cognito) drastically reduce operational overhead but offer less runtime control compared to dedicated EC2 instances.
  • Stack Maintainability: Used containerized ECR artifacts and Prisma to ensure the codebase remains portable and can be migrated to EC2/ECS without significant refactoring.
  • Latency vs Features: Traded minor initialization spikes for full ORM capabilities, which significantly reduced the complexity of data operations and improved long-term maintainability.
  • Warm Performance: While cold starts add latency, consecutive requests reuse warm Lambda containers, offering near-instant responsiveness comparable to persistent servers.
  • Operational Insight: Relied on CloudWatch for real-time observability, essential for monitoring cold starts and connection limits in a serverless environment.
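The warm-container reuse and per-container connection points above both hinge on the fact that module scope survives between warm Lambda invocations. A minimal sketch, with a hypothetical `Db` stub standing in for `PrismaClient` so it is self-contained:

```typescript
// Module scope persists across warm Lambda invocations, so expensive
// clients should be created once and reused. `Db` is a hypothetical
// stand-in for PrismaClient.
class Db {
  static instances = 0
  constructor() {
    Db.instances++ // each instance would open its own DB connection pool
  }
  query(sql: string): string {
    return `result of: ${sql}`
  }
}

let db: Db | undefined

// Lazily create the client on the first (cold) invocation only;
// warm invocations reuse the cached instance.
function getDb(): Db {
  if (!db) db = new Db()
  return db
}

function handler(): string {
  return getDb().query('SELECT 1')
}
```

This is also why connection counts grow with *concurrent* containers rather than with requests, and why RDS Proxy or PgBouncer becomes necessary at higher concurrency.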

Implementation Highlights

A robust engineering culture ensures reliability and speed:

  • CI/CD Pipelines: Automated testing and deployment workflows via GitHub Actions.
  • Infrastructure: Containerized API managed via AWS ECR and deployed to Lambda.
  • Monitoring: Real-time log aggregation and performance tracking using Amazon CloudWatch.
  • CDN: AWS CloudFront for global content delivery.
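As a hedged illustration of the CI/CD flow described above, a GitHub Actions deploy job could look like the following (workflow name, secrets, region, and function name are placeholders, not taken from the real pipeline):

```yaml
# Hypothetical deploy workflow sketch — names and secrets are placeholders
name: deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: ${{ secrets.AWS_DEPLOY_ROLE }}
          aws-region: us-east-1
      - uses: aws-actions/amazon-ecr-login@v2
        id: ecr
      - name: Build and push image, then update Lambda
        run: |
          IMAGE="${{ steps.ecr.outputs.registry }}/habitly-api:${{ github.sha }}"
          docker build -t "$IMAGE" .
          docker push "$IMAGE"
          aws lambda update-function-code --function-name habitly-api --image-uri "$IMAGE"
```

Tagging images with the commit SHA keeps ECR artifacts traceable to the exact source revision that produced them.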

Challenges & Solutions

Successfully addressing these core engineering challenges was critical for production readiness:

  • Traffic Volatility & Cost: Fluctuating traffic made always-on EC2 instances cost-inefficient, so I chose AWS Lambda for its scale-to-zero behavior and pay-per-execution pricing.
  • Security & Compliance: To handle sensitive user authentication data securely and meet compliance requirements, I integrated AWS Cognito for managed identity and access management.
  • Deployment & Bundle Size: The combination of Hono and Prisma (including its query engine binaries) increased the deployment artifact size, complicating zip-based deployment. I used Docker and AWS ECR to containerize the app, ensuring consistent, streamlined deployments.
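To illustrate the containerization approach, a Lambda-compatible Dockerfile might look like the following sketch (the build output paths and handler export name are assumptions, not taken from the real repository):

```dockerfile
# Hypothetical sketch: AWS's public Node.js Lambda base image
FROM public.ecr.aws/lambda/nodejs:20

# Install production dependencies into the Lambda task root
COPY package*.json ${LAMBDA_TASK_ROOT}/
RUN npm ci --omit=dev

# Copy the compiled app and schema, then generate the Prisma client
# inside the image so its engine binaries match the Lambda runtime
COPY dist/ ${LAMBDA_TASK_ROOT}/dist/
COPY prisma/ ${LAMBDA_TASK_ROOT}/prisma/
RUN npx prisma generate

# Assumed entry point: dist/index.js exporting `handler`
CMD ["dist/index.handler"]
```

Running `prisma generate` inside the image avoids the binary-compatibility issues that arise when engines are generated on a developer machine and shipped to Lambda.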

Future Roadmap

Developing a set of features focused on multi-modal AI interactions, including voice-to-schedule and image-based habit tracking.

  • Collaborative AI-managed workspaces
  • Social integration and community messaging
  • Voice-to-schedule AI integration
  • Image-based habit tracking and verification