Muhammad Nawaz

Application Developer · Software Developer · Data Scientist · Project Manager

About Me

Hey there! I'm Nawaz, a seasoned QA Automation and Chrome Extension Developer with over 4 years of hands-on experience. I specialize in delivering high-quality automation solutions and web scraping expertise across various domains, including eCommerce, Social Media, and more. My skills span Python, Java, JavaScript, and multiple automation tools, allowing me to design and optimize workflows to meet complex project needs. Passionate about ensuring software achieves the highest quality standards, I bring a robust portfolio showcasing my expertise in both manual and automation testing.

  • Location: Lahore, Pakistan
  • Languages: English, Urdu/Hindi
  • Skills:
    N8N
    Make.com
    Selenium (Advanced)
    Cypress
    Playwright
    Appium
    TestNG
    JUnit
    Jest
    Performance Testing (JMeter and K6)
    Python (Expert)
    Scrapy for Data Scraping
    JSON, XML, CSV Parsing
    Flutter (Cross-platform App Development)
    Firebase Backend Development
    Custom Chrome Extension Development
    Workflow Automation (N8N/Make.com)
    JavaScript (Advanced)
    HTML
    CSS
    TensorFlow
    PyTorch
    AI Automation
    Agentic Web Automation
    Agentic Web Scraping
    AI Calling Agent
    Java
    Dart
    AWS
    Google Cloud
    Azure
    Jenkins
    Bitbucket
    GitHub
    GitLab
My Services
Workflow Automation N8N/Make.com

Design and implement automated workflows using N8N and Make.com to streamline business processes, integrate systems, and automate repetitive tasks.

Key Features
  • Custom workflow design and implementation for business process automation using N8N and Make.com
  • Integration with REST APIs, webhooks, and various third-party services
  • Data transformation and manipulation using N8N's expression editor, Make.com's data mapping, and custom JavaScript/Python code
  • Workflow optimization, error handling, and monitoring for reliable automation
  • Deployment and scaling of workflows (self-hosted N8N instances or cloud-based Make.com scenarios)

App Development

Design and build custom applications to meet specific user needs and business goals, ensuring functionality, scalability, and a great user experience.

Key Features
  • Create tailored applications for web, mobile, or desktop platforms with a focus on user-centric design.
  • Develop both front-end and back-end components using modern technologies and frameworks.
  • Integrate apps with third-party services, APIs, and databases to enhance functionality and user experience.
  • Conduct thorough testing to identify and fix bugs, and optimize app performance for smooth operation.

Data Scientist

Apply advanced analytical techniques and machine learning models to extract insights from data, drive decision-making, and solve complex problems.

Key Features
  • Perform exploratory data analysis (EDA) to uncover patterns, trends, and anomalies.
  • Develop and implement predictive models and algorithms using tools like Python and R.
  • Create compelling visualizations and dashboards to present data insights in an accessible and actionable format.
  • Ensure data quality by cleaning, transforming, and preparing data for analysis and modeling.

QA Automation

Streamline testing processes by creating automated testing frameworks that ensure software quality and reliability.

Key Features:
  • Development of comprehensive test scripts for various platforms (web, mobile, etc.)
  • Utilization of advanced tools like Selenium, Cypress, and TestNG
  • Continuous integration and delivery support for automated testing
  • Detailed test reports and bug tracking

Web Scraping

Extract valuable data from websites efficiently, providing actionable insights and data integration for business needs.

Key Features:
  • Design and implementation of custom scraping solutions
  • Handling complex scraping scenarios (e.g., dynamic content, anti-scraping mechanisms)
  • Data cleaning and transformation to ensure accuracy
  • Integration of scraped data into databases or other systems

Chrome Extension Development

Develop or customize Chrome extensions to enhance browser functionality and user experience.

Key Features
  • Creation of extensions tailored to specific user requirements
  • Integration with web services and APIs for extended capabilities.
  • UI/UX design to ensure a seamless user experience
  • Ongoing maintenance and updates to meet evolving needs.

AI Automation

Design and implement AI-powered automation workflows that combine LLMs, agents, and tools to streamline business processes and decision-making.

Key Features:
  • LLM-integrated workflows (OpenAI, Claude, etc.)
  • Agentic pipelines with tools and reasoning
  • End-to-end process automation with AI
  • Integration with N8N, Make.com, and custom APIs

Agentic Web Automation

Intelligent browser and web automation using AI agents that navigate, interact, and adapt to dynamic pages and workflows.

Key Features:
  • AI-driven browser automation and RPA
  • Dynamic page handling and form interaction
  • Multi-step workflows with error recovery
  • Integration with Playwright, Puppeteer, and agent frameworks

Agentic Web Scraping

AI-powered data extraction that adapts to site structure changes, handles JavaScript-rendered content, and normalizes data at scale.

Key Features:
  • Agent-based scraping with LLM reasoning
  • Handling of dynamic and anti-bot protected sites
  • Structured output (JSON, CSV) and data cleaning
  • Scalable pipelines with rate limiting and proxies

AI Calling Agent

Voice and call automation using AI agents for inbound/outbound calls, IVR, and conversational AI over phone and VoIP.

Key Features:
  • AI voice agents and IVR integration
  • Outbound/inbound call automation and routing
  • Integration with Twilio, Vapi, and speech APIs
  • Conversation logging and analytics

Pricing
Essential
$25 / hour
  • Automation Scripts
  • Installation & Configuration
  • Basic Web Scraping
  • Test Case Development
  • Maintenance not included
Advanced/Customized
$35 / hour
  • App Development
  • Installation & Configuration
  • Patch Management
  • Multi-User Management
  • Complex Web Scraping
  • Data Integration Solutions
  • Advanced Chrome Extension Development
  • Maintenance included

Testimonials

Resume
Experience
2023 - Present
Software Engineer
Upwork

Specializing in QA Automation, Web Scraping, Flutter Apps, Chrome Extensions, and Machine Learning solutions.

2021 - 2023
Software Engineer
CodeAutomation.AI

Led a team of automation engineers, managed all stages of automation projects, and developed, tested, and maintained automation infrastructure. I oversaw code reviews, updated test strategies, and trained employees on automation tools while ensuring thorough documentation and implementing process improvements.

2018 - 2021
App Product Developer
Logics

At Logics I.T. Training Center, I trained interns and short-course students, developed and tested more than 50 high-quality Android apps with various features, and ensured content met business and client requirements. I wrote clean, efficient code, articulated technical risks to stakeholders, and maintained high standards of app functionality and integration.

Education
Dec 2024
Certified Software Engineer
HackerRank

Validated skills in software engineering principles and practical implementation.

Dec 2024
Problem Solving Specialist
HackerRank

Expertise in solving complex algorithms and technical problem scenarios efficiently.

Dec 2024
Certified SQL Developer
HackerRank

Recognized for advanced SQL skills, including complex queries and relational schema design.

Dec 2024
REST API Developer Certification
HackerRank

Proven expertise in designing and integrating RESTful APIs for scalable applications.

Dec 2024
Python Programming Certification
HackerRank

Certified for advanced Python programming and application in real-world projects.

Dec 2024
Java Developer Certification
HackerRank

Validated skills in object-oriented programming and enterprise application development using Java.

Dec 2024
JavaScript Specialist Certification
HackerRank

Proficient in JavaScript development for both front-end and back-end applications.

Dec 2024
Node.js Developer Certification
HackerRank

Certified in creating scalable, efficient server-side solutions with Node.js.

Jul 2023
Data Analytics Essentials
CISCO

Mastered the key concepts and tools for analyzing and managing data effectively.

Sep 2023
Intro to Data Science
CISCO

Introduced to data science tools and methodologies for modern data analysis.

Jul 2024
Endpoint Security
CISCO

Certified in securing endpoints and devising effective cybersecurity strategies.

2017 - 2021
COMSATS University Islamabad
Pakistan

Completed a B.S. in Software Engineering at COMSATS University Islamabad.

Programming Skills
  • Python
    75%
  • JavaScript
    70%
  • Flutter
    85%
  • Java
    95%
Tool Stack
  • PyCharm
    75%
  • Android Studio
    85%
  • Xcode
    85%
  • Docker
    80%
  • Postman
    80%
  • Jenkins
    70%
  • Appium
    85%
  • Selenium
    85%
Cloud
  • AWS
    70%
  • Google Cloud
    60%
  • Azure
    65%
General Skills
Coding
  • Flutter
    85%
  • Python
    75%
  • JavaScript
    70%
  • Java
    90%
  • C#
    75%
Languages
  • English
    70%
  • Urdu/Hindi
    90%
Platforms
  • Windows
    90%
  • Linux
    75%
  • Mac OS
    65%
  • Android
    95%
Knowledge
  • Data science and machine learning
  • Data structures and algorithms
  • SQL and database technologies
  • Project management and development
  • Git and GitHub
  • Linux and Windows management
  • Communication and collaboration
  • Object-Oriented Programming (OOP)
Works
Get in Touch
  • Address: Lahore, Pakistan
  • Freelance: Available
Contact Form


    Building a Production-Grade N8N Infrastructure on 16GB RAM

    The Challenge

    N8N is a powerful workflow automation platform, but scaling it to handle heavy production workloads while maintaining a reasonable infrastructure budget is challenging. This is the story of how I architected a high-performance N8N deployment that handles complex Python and JavaScript workloads within the constraints of a single 16GB server.

    Architecture Overview

    The infrastructure consists of multiple specialized components working together in a queue-based architecture:

    ┌─────────────────────────────────────────────────────────┐
    │                    Client Requests                       │
    └──────────────────────┬──────────────────────────────────┘
                           │
                           ▼
                  ┌─────────────────┐
                  │  Caddy (Proxy)  │
                  │   SSL/TLS       │
                  └────────┬────────┘
                           │
              ┌────────────┼────────────┐
              │            │            │
              ▼            ▼            ▼
        ┌─────────┐  ┌──────────┐  ┌──────────┐
        │ N8N Main│  │ Webhook  │  │ Webhook  │
        │ UI/API  │  │ Handler  │  │ Handler  │
        └────┬────┘  └──────────┘  └──────────┘
             │
             │ Publishes jobs to
             ▼
        ┌──────────────────┐
        │  Redis Queue     │
        │  (Bull Queue)    │
        └────────┬─────────┘
                 │
           ┌─────┴─────┐
           │           │
           ▼           ▼
       ┌────────┐  ┌────────┐
       │Worker-1│  │Worker-2│
       │  15cc  │  │  15cc  │
       └───┬────┘  └───┬────┘
           │           │
           │ Delegates │
           ▼           ▼
       ┌────────┐  ┌────────┐
       │Task    │  │Task    │
       │Runners │  │Runners │
       │Python  │  │Python  │
       │& JS    │  │& JS    │
       └────────┘  └────────┘

    Key Design Decisions

    1. External Task Runners for Heavy Workloads

    One of the most critical architectural decisions was implementing external task runners. N8N supports executing Python and JavaScript code, but running these in the main Node.js process poses security and performance risks.

    The Solution:

    • Isolated task runner containers built on the official n8nio/runners:latest image
    • Each runner operates in a sandboxed environment
    • Pre-installed heavy dependencies: NumPy, Pandas, yt-dlp, requests
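
    A custom runner image along these lines bakes those dependencies in. This is a sketch: the pip path comes from the runner configuration shown later in this post, and the unprivileged user name is assumed to match the base image.

```dockerfile
# Sketch: extend the official runners image so workflows don't
# pip-install heavy packages at execution time.
FROM n8nio/runners:latest

USER root
RUN /opt/runners/task-runner-python/.venv/bin/pip install --no-cache-dir \
    numpy pandas requests yt-dlp
# Drop privileges again (user name assumed; match the base image).
USER runner
```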

    Benefits:

    • Security: Code execution is isolated from the main process
    • Performance: Python workloads don't block Node.js event loop
    • Stability: Crashed code execution doesn't bring down the worker
    • Resource Control: Each runner has dedicated memory limits (1.5GB)

    2. Dedicated Task Runners Per Worker

    Instead of sharing a pool of task runners, each worker has dedicated task runners. This was a deliberate choice for heavy workload scenarios.

    worker-1 (2.5GB)
      ├── task-runner-worker-1-1 (1.5GB)
      └── task-runner-worker-1-2 (1.5GB)
    
    worker-2 (2.5GB)
      ├── task-runner-worker-2-1 (1.5GB)
      └── task-runner-worker-2-2 (1.5GB)

    Why This Matters:

    • Load Isolation: A saturated worker-1 doesn't starve worker-2 of task runners
    • Debugging: Easy to identify which worker is causing performance issues
    • Scaling: Can scale task runners independently per worker based on actual load
    • Failure Containment: Issues in one worker's task runners don't cascade
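
    In docker-compose terms the layout above looks roughly like this (the image names and the auth-token variable are assumptions consistent with the rest of this setup; the broker URI convention is the one discussed under Common Pitfalls):

```yaml
# Sketch: each worker exposes its own task broker; its two runner
# containers point only at that broker.
worker-1:
  environment:
    - EXECUTIONS_MODE=queue
    - N8N_RUNNERS_ENABLED=true
    - N8N_RUNNERS_MODE=external
    - N8N_RUNNERS_BROKER_LISTEN_ADDRESS=0.0.0.0
  mem_limit: 2.5g

task-runner-worker-1-1:
  image: n8nio/runners:latest
  environment:
    - N8N_RUNNERS_TASK_BROKER_URI=http://worker-1:5679
    - N8N_RUNNERS_AUTH_TOKEN=${N8N_RUNNERS_AUTH_TOKEN}
  mem_limit: 1.5g

task-runner-worker-1-2:
  image: n8nio/runners:latest
  environment:
    - N8N_RUNNERS_TASK_BROKER_URI=http://worker-1:5679
    - N8N_RUNNERS_AUTH_TOKEN=${N8N_RUNNERS_AUTH_TOKEN}
  mem_limit: 1.5g
```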

    3. Queue-Based Architecture

    The infrastructure uses Bull Queue (Redis-backed) for job distribution:

    Manual Execution (UI) → Main Process → task-runner-main
    Production Jobs → Redis Queue → Workers → task-runner-worker-X
    Webhooks → Webhook Process → Redis Queue → Workers

    This separation ensures:

    • UI remains responsive even under heavy production load
    • Production jobs are distributed across workers
    • Failed jobs can be retried automatically
    • Horizontal scaling is possible
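
    In practice this separation comes down to a handful of environment variables shared by every process that touches the queue; a sketch (the Redis hostname and the YAML-anchor layout are illustrative):

```yaml
x-queue-env: &queue-env
  EXECUTIONS_MODE: queue
  QUEUE_BULL_REDIS_HOST: redis
  QUEUE_BULL_REDIS_PORT: "6379"

n8n-main:
  environment:
    <<: *queue-env

webhook-1:
  command: webhook              # dedicated webhook process
  environment:
    <<: *queue-env

worker-1:
  command: worker --concurrency=15
  environment:
    <<: *queue-env
```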

    Memory Optimization Strategy

    Fitting a production-grade setup into 16GB required careful resource allocation:

    Component               Count   Memory/Instance   Total    Strategy
    N8N Main                1       4GB               4GB      UI/API - needs headroom for workflow editing
    Workers                 2       2.5GB             5GB      Queue consumers - right-sized for 15 concurrent jobs
    Task Runners (Main)     2       1.5GB             3GB      Manual executions - lower load
    Task Runners (Workers)  4       1.5GB             6GB      Production - most critical
    Webhooks                2       1GB               2GB      Stateless handlers
    Caddy                   1       256MB             256MB    Lightweight proxy
    Prometheus              1       512MB             512MB    Monitoring with 7-day retention
    Total                   13      -                 ~21GB    With soft limits & reservations

    How it fits in 16GB:

    • Memory reservations vs limits: Containers reserve minimum memory but can burst to limits
    • Actual usage: ~12-14GB under normal load
    • Headroom: 2-4GB for OS and spikes
    • Log level: Set to warn to reduce I/O overhead
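
    In compose terms, the reservation/limit split for a worker looks like this. The 2.5GB limit is taken from the allocation table; the 1GB reservation is an illustrative value. (With plain docker-compose outside Swarm, the classic mem_limit/mem_reservation keys achieve the same effect as deploy.resources.)

```yaml
worker-1:
  deploy:
    resources:
      reservations:
        memory: 1g       # guaranteed floor
      limits:
        memory: 2.5g     # burst ceiling
```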

    Docker Compose Configuration Highlights

    Custom N8N Image with FFmpeg

    FROM n8nio/n8n:latest
    
    USER root
    RUN apk add --no-cache ffmpeg
    USER node

    Simple and effective - adds video processing capabilities without bloat.

    Task Runner Configuration

    The task runners use a custom configuration file (n8n-task-runners.json) that defines:

    {
      "task-runners": [
        {
          "runner-type": "javascript",
          "command": "/usr/local/bin/node",
          "args": ["--disallow-code-generation-from-strings"],
          "env-overrides": {
            "NODE_FUNCTION_ALLOW_BUILTIN": "",
            "NODE_FUNCTION_ALLOW_EXTERNAL": ""
          }
        },
        {
          "runner-type": "python",
          "command": "/opt/runners/task-runner-python/.venv/bin/python",
          "env-overrides": {
            "N8N_RUNNERS_EXTERNAL_ALLOW": "numpy,pandas,requests,yt-dlp"
          }
        }
      ]
    }

    This ensures:

    • JavaScript runs with security flags (no eval() or other code generation from strings)
    • Python has explicit whitelist of allowed packages
    • Each runner type has its own resource constraints

    Worker Configuration with Task Runner Support

    Critical environment variables that enable workers to use task runners:

    worker-1:
      environment:
        - EXECUTIONS_MODE=queue
        - N8N_RUNNERS_ENABLED=true
        - N8N_RUNNERS_MODE=external
        - N8N_RUNNERS_BROKER_LISTEN_ADDRESS=0.0.0.0
        - N8N_RUNNERS_MAX_CONCURRENCY=5

    Without these, workers would fail when encountering Python/JS code nodes.

    Performance Characteristics

    Throughput

    • 30 concurrent workflow executions (2 workers × 15 concurrency)
    • 6 parallel Python/JS tasks (6 task runners total)
    • Unlimited webhook requests (handled by dedicated webhook processes)

    Load Distribution

    Manual/UI Load → task-runner-main (sporadic, low volume)
    Production Load → task-runner-worker-1 & worker-2 (continuous, high volume)
    Webhook Traffic → webhook processes → Queue → Workers

    Resource Utilization Under Load

    • Normal: 60-70% memory usage (~10-11GB)
    • Peak: 85-90% memory usage (~14-15GB)
    • Critical threshold: Set at 90% with alerts

    Monitoring and Observability

    Prometheus Integration

    Each process exposes metrics with unique prefixes:

    • n8n_* - Main process metrics
    • n8n_worker_1_* - Worker 1 metrics
    • n8n_worker_2_* - Worker 2 metrics
    • n8n_webhook_* - Webhook process metrics
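
    On the Prometheus side this is a single scrape job covering all processes, assuming each container enables metrics (N8N_METRICS=true) and serves them on the internal port 5678; the target names are assumptions matching the service names used in this setup:

```yaml
# prometheus.yml sketch
scrape_configs:
  - job_name: n8n
    scrape_interval: 30s
    static_configs:
      - targets:
          - n8n-main:5678
          - worker-1:5678
          - worker-2:5678
          - webhook-1:5678
```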

    Health Checks

    Every critical component has health checks:

    healthcheck:
      test: ["CMD", "wget", "--spider", "http://localhost:5678/healthz"]
      interval: 30s
      timeout: 5s
      retries: 3

    This ensures:

    • Docker automatically restarts unhealthy containers
    • Load balancers can route traffic away from degraded instances
    • Monitoring systems get real-time health status

    Common Pitfalls Avoided

    1. Task Runner Connection Mismatch

    Problem: Using docker-compose replicas creates containers like worker_1, worker_2, but there's no service named just worker.

    Wrong:

    task-runner-worker:
      environment:
        - N8N_RUNNERS_TASK_BROKER_URI=http://worker:5679  # Fails!

    Correct:

    task-runner-worker-1:
      environment:
        - N8N_RUNNERS_TASK_BROKER_URI=http://worker-1:5679  # Works!

    2. Workers Without Task Runner Configuration

    Workers must explicitly enable task runners:

    - N8N_RUNNERS_ENABLED=true
    - N8N_RUNNERS_MODE=external

    Without these, Python/JS nodes will fail silently.

    3. Insufficient Memory for Task Runners

    Initial allocation of 512MB per task runner caused OOM kills with Pandas operations. Increased to 1.5GB to handle:

    • Large DataFrame operations
    • NumPy array computations
    • Video processing with yt-dlp

    Deployment and Scaling

    Initial Deployment

    # Build custom images
    docker-compose build --no-cache
    
    # Start infrastructure
    docker-compose up -d
    
    # Verify all services
    docker-compose ps
    docker stats

    Horizontal Scaling (Future)

    The architecture supports scaling to multiple servers:

    1. Move Redis to dedicated server
    2. Add worker servers pointing to central Redis
    3. Scale task runners based on worker load
    4. Use external load balancer instead of Caddy
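
    Steps 1 and 2 amount to repointing the queue variables at the shared Redis; a sketch for an additional worker on a second host (the hostname is a placeholder):

```yaml
worker-3:
  command: worker --concurrency=15
  environment:
    - EXECUTIONS_MODE=queue
    - QUEUE_BULL_REDIS_HOST=redis.internal.example
    - QUEUE_BULL_REDIS_PORT=6379
```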

    Vertical Scaling

    If upgrading to 32GB RAM:

    • Increase worker count: 2 → 4
    • Add task runners per worker: 2 → 3
    • Increase concurrency: 15 → 20 per worker
    • Expected throughput: 80 concurrent executions
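
    The throughput figures are simply workers multiplied by per-worker concurrency; as a quick sanity check:

```shell
# Current capacity: 2 workers x 15 concurrent jobs each
echo $((2 * 15))    # 30
# After a 32GB upgrade: 4 workers x 20 concurrent jobs each
echo $((4 * 20))    # 80
```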

    Security Considerations

    Network Isolation

    All services communicate over internal Docker network. Only Caddy exposes ports externally.

    Code Execution Sandboxing

    • Task runners use security flags: --disallow-code-generation-from-strings
    • Python external packages are whitelisted
    • No access to host filesystem from task runners

    Secrets Management

    Sensitive values stored in .env file:

    N8N_ENCRYPTION_KEY=your-encryption-key
    N8N_RUNNERS_AUTH_TOKEN=your-auth-token

    Never committed to version control.
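
    The values themselves can be generated rather than hand-written; a minimal sketch using openssl (the 24-byte length is a reasonable choice, not an n8n requirement):

```shell
# Write two random hex secrets (48 hex chars each) into .env.
N8N_ENCRYPTION_KEY=$(openssl rand -hex 24)
N8N_RUNNERS_AUTH_TOKEN=$(openssl rand -hex 24)
printf 'N8N_ENCRYPTION_KEY=%s\nN8N_RUNNERS_AUTH_TOKEN=%s\n' \
  "$N8N_ENCRYPTION_KEY" "$N8N_RUNNERS_AUTH_TOKEN" > .env
chmod 600 .env   # owner-readable only
```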

    Lessons Learned

    1. Memory Limits vs Reservations

    Using both provides the best of both worlds:

    • Reservation: Guaranteed minimum (prevents resource starvation)
    • Limit: Maximum allowed (prevents runaway processes)

    2. Log Levels Matter

    Changing from info to warn reduced disk I/O by ~40% and saved ~200MB memory.

    3. Health Check Intervals

    Initial 10s intervals created unnecessary overhead. 30s is sufficient for production.

    4. Task Runner Auto-Shutdown

    Setting N8N_RUNNERS_AUTO_SHUTDOWN_TIMEOUT=300 allows idle runners to shut down, freeing memory.

    Conclusion

    Building a production-grade N8N infrastructure on limited resources requires careful architectural decisions and resource optimization. The key takeaways:

    1. External task runners are essential for heavy Python/JS workloads
    2. Dedicated task runners per worker provide better isolation and debugging
    3. Queue-based architecture enables scaling and resilience
    4. Memory optimization through limits, reservations, and right-sizing
    5. Comprehensive monitoring is critical for production stability

    This infrastructure successfully handles production workloads processing thousands of workflow executions daily, with complex data transformations using Pandas, video downloads with yt-dlp, and custom JavaScript logic—all within a 16GB constraint.

    The architecture is battle-tested, cost-effective, and ready to scale when needed.
