📚 Course Curriculum

27 comprehensive notebooks, including data modeling foundations, across 5 weeks of core material plus advanced modules

📊 Course Overview

Duration

5-6 Weeks

Complete journey from fundamentals to production deployment

Notebooks

27 Total

Hands-on practice with real-world patterns

Time Investment

45-60 Hours

Comprehensive learning experience

Level

All Levels

From beginners to advanced practitioners

Week 1: Databricks Fundamentals

5 notebooks | ~8-10 hours

Master the Databricks platform, Unity Catalog governance, cluster management, and Spark optimization techniques.

01_databricks_fundamentals.py

Platform architecture, runtime environments, workspace organization, and best practices for production data engineering.

Platform Architecture Workspace Management Best Practices

02_unity_catalog_deep_dive.py

Data governance fundamentals, three-level namespace (catalog.schema.table), permissions, and secure data sharing.

Unity Catalog Data Governance Permissions RBAC
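
A minimal sketch of the three-level namespace this notebook covers, assuming a Databricks notebook where `spark` is predefined; the catalog, schema, table, and group names are all hypothetical:

```python
# Three-level namespace: catalog.schema.table (names here are hypothetical).
# Creating catalogs typically requires metastore-admin privileges.
spark.sql("CREATE CATALOG IF NOT EXISTS main")
spark.sql("CREATE SCHEMA IF NOT EXISTS main.sales")

df = spark.range(5).withColumnRenamed("id", "order_id")
df.write.saveAsTable("main.sales.orders")   # fully qualified three-level name

# Grant read access to a (hypothetical) group via standard Unity Catalog SQL
spark.sql("GRANT SELECT ON TABLE main.sales.orders TO `analysts`")
```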

03_cluster_management.py

Autoscaling strategies, instance types, cost optimization, and cluster configuration for different workloads.

Cluster Config Autoscaling Cost Optimization
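
For orientation, a hedged example of an autoscaling cluster spec in the shape of the Databricks Clusters API; the runtime version, instance type, and worker counts are illustrative, not recommendations:

```python
# Hypothetical autoscaling cluster definition (Clusters API payload as a dict).
cluster_spec = {
    "cluster_name": "etl-autoscaling",
    "spark_version": "14.3.x-scala2.12",   # pick an LTS runtime available in your workspace
    "node_type_id": "i3.xlarge",            # cloud-specific instance type
    "autoscale": {"min_workers": 2, "max_workers": 8},  # scale with load, cap the spend
    "autotermination_minutes": 30,          # terminate idle clusters to control cost
}
```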

04_spark_on_databricks.py

Distributed computing, DataFrame operations, RDD transformations, and performance tuning for large-scale data processing.

Spark DataFrames RDD Operations Performance Tuning

05_delta_lake_concepts_explained.py

Delta Lake fundamentals, ACID guarantees, and how each ACID property maps to commits in the Delta transaction log.

Delta Lake ACID Transaction Logs
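
A small sketch of the transaction log in action; the table name is hypothetical:

```python
# Each write below is one atomic commit recorded in the table's _delta_log directory.
df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "val"])
df.write.format("delta").mode("overwrite").saveAsTable("demo.events")

spark.createDataFrame([(3, "c")], ["id", "val"]) \
    .write.format("delta").mode("append").saveAsTable("demo.events")

spark.sql("DESCRIBE HISTORY demo.events").show()                 # inspect the commit log
spark.read.option("versionAsOf", 0).table("demo.events").show()  # time travel to version 0
```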

Foundations: Data Modeling Patterns

4 notebooks | ~6-8 hours

Essential data modeling concepts for building maintainable, performant data architectures, including medallion patterns, dimensional modeling, and slowly changing dimensions.

01_introduction_to_data_modeling.py

Fundamentals of data modeling: organizing data for consistency, accessibility, and performance. Learn why modeling matters and key design principles.

Data Modeling Basics Design Principles Best Practices

02_medallion_architecture.py

Bronze, Silver, Gold layers: progressive data refinement pattern for data lakes. Understand when to use each layer and how they work together.

Medallion Architecture Data Layers Lake Patterns
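
A compact, hypothetical sketch of the Bronze → Silver → Gold flow (paths, table names, and columns are placeholders):

```python
from pyspark.sql import functions as F

# Bronze: raw data as ingested, plus lineage metadata
bronze = (spark.read.json("/landing/orders/")
          .withColumn("_ingested_at", F.current_timestamp()))
bronze.write.format("delta").mode("append").saveAsTable("bronze.orders")

# Silver: cleaned and conformed
silver = (spark.table("bronze.orders")
          .dropDuplicates(["order_id"])
          .filter(F.col("amount") > 0))
silver.write.format("delta").mode("overwrite").saveAsTable("silver.orders")

# Gold: business-level aggregates ready for analytics
gold = silver.groupBy("customer_id").agg(F.sum("amount").alias("lifetime_value"))
gold.write.format("delta").mode("overwrite").saveAsTable("gold.customer_ltv")
```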

03_dimensional_modeling.py

Star and snowflake schemas, fact and dimension tables for analytics. Learn to design data warehouses optimized for business intelligence.

Star Schema Facts & Dimensions Analytics Design
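
As a flavor of star-schema querying, a hedged example joining a hypothetical fact table to a dimension on its surrogate key:

```python
# Fact and dimension table names are hypothetical.
revenue_by_region = spark.sql("""
    SELECT d.region,
           SUM(f.sales_amount) AS total_revenue
    FROM gold.fact_sales f
    JOIN gold.dim_store d
      ON f.store_key = d.store_key   -- surrogate-key join, typical of star schemas
    GROUP BY d.region
""")
revenue_by_region.show()
```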

04_scd_and_delta_patterns.py

Slowly Changing Dimensions (SCD Types 1, 2, 3) and Delta Lake implementation patterns. Handle historical data changes in production systems.

SCD Types Historical Data Delta Patterns
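
A sketch of an SCD Type 2 upsert using Delta's MERGE; the table names and the `address` change-detection column are hypothetical, and a production version would first restrict the updates to rows that actually changed:

```python
from delta.tables import DeltaTable
from pyspark.sql import functions as F

dim = DeltaTable.forName(spark, "silver.dim_customer")
updates = spark.table("bronze.customer_updates")

# Step 1: expire the current row when a tracked attribute changes
(dim.alias("t")
 .merge(updates.alias("s"), "t.customer_id = s.customer_id AND t.is_current = true")
 .whenMatchedUpdate(
     condition="t.address <> s.address",
     set={"is_current": "false", "end_date": "current_date()"})
 .execute())

# Step 2: append the new current version (Type 2 keeps full history)
new_rows = (updates
            .withColumn("is_current", F.lit(True))
            .withColumn("start_date", F.current_date())
            .withColumn("end_date", F.lit(None).cast("date")))
new_rows.write.format("delta").mode("append").saveAsTable("silver.dim_customer")
```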

Week 2: Data Ingestion Mastery

5 notebooks | ~8-10 hours

Production-grade ingestion patterns from files, APIs, databases, and cloud storage with error handling and retry logic.

06_file_ingestion.py

CSV, JSON, Parquet ingestion with explicit schemas, data quality validation, and Delta Lake integration.

CSV/JSON/Parquet Schema Enforcement Data Validation
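
A minimal sketch of schema-enforced CSV ingestion; the path and fields are hypothetical:

```python
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

schema = StructType([
    StructField("order_id", StringType(), nullable=False),
    StructField("amount", DoubleType(), nullable=True),
    StructField("order_ts", TimestampType(), nullable=True),
])

orders = (spark.read
          .schema(schema)               # enforce types up front instead of inferring
          .option("header", "true")
          .option("mode", "FAILFAST")   # fail loudly on malformed rows
          .csv("/landing/orders/*.csv"))

orders.write.format("delta").mode("append").saveAsTable("bronze.orders")
```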

07_api_ingest.py

REST API integration, authentication patterns, retry logic, rate limiting, and error handling for production systems.

REST APIs Authentication Retry Logic
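
A minimal retry-with-backoff pattern of the kind this notebook builds on, using the `requests` library; the endpoint and token are hypothetical:

```python
import time
import requests

def fetch_with_retry(url, token, max_retries=3, backoff_seconds=2):
    headers = {"Authorization": f"Bearer {token}"}
    for attempt in range(1, max_retries + 1):
        resp = requests.get(url, headers=headers, timeout=10)
        if resp.status_code == 429 or resp.status_code >= 500:
            # Rate-limited or server error: back off exponentially, then retry
            time.sleep(backoff_seconds ** attempt)
            continue
        resp.raise_for_status()   # surface 4xx client errors immediately
        return resp.json()
    raise RuntimeError(f"Giving up on {url} after {max_retries} attempts")

# payload = fetch_with_retry("https://api.example.com/v1/orders", token="...")
```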

08_database_ingest.py

JDBC connections, incremental loading, change data capture (CDC), and database integration patterns.

JDBC Incremental Load CDC Patterns
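
A hedged sketch of high-water-mark incremental loading over JDBC; the connection string, secret scope, and table names are hypothetical:

```python
# Find the latest timestamp already loaded (the high-water mark)
last_loaded = spark.sql(
    "SELECT COALESCE(MAX(updated_at), TIMESTAMP'1970-01-01') AS hwm FROM bronze.customers"
).first()["hwm"]

# Push the filter down to the source database via a subquery
incremental = (spark.read.format("jdbc")
    .option("url", "jdbc:postgresql://db-host:5432/appdb")
    .option("dbtable", f"(SELECT * FROM customers WHERE updated_at > '{last_loaded}') AS src")
    .option("user", "etl_user")
    .option("password", dbutils.secrets.get("etl-scope", "db-password"))
    .load())

incremental.write.format("delta").mode("append").saveAsTable("bronze.customers")
```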

09_s3_ingest.py

Cloud storage patterns, partitioning strategies, data lakehouse architecture, and efficient file organization.

S3/Cloud Storage Partitioning Lakehouse
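
A small example of partitioned Delta writes to cloud storage; the bucket path and partition column are hypothetical:

```python
(spark.table("silver.events")
 .write.format("delta")
 .partitionBy("event_date")   # prune partitions on date-filtered queries
 .mode("overwrite")
 .save("s3://my-lakehouse/silver/events/"))

# Readers that filter on the partition column only scan matching directories
recent = (spark.read.format("delta")
          .load("s3://my-lakehouse/silver/events/")
          .filter("event_date >= '2024-01-01'"))
```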

10_ingestion_concepts_explained.py

Batch vs. streaming ingestion, schema inference vs. explicit schemas, error-handling patterns, idempotent ingestion, and incremental loading.

Ingestion Schema/Error Handling Incremental Loading

Week 3: Advanced Transformations

4 notebooks | ~6-8 hours

Complex Spark operations including window functions, advanced analytics, and medallion architecture transformations.

11_simple_transformations.py

Data cleaning, type conversions, business logic implementation, and Bronze to Silver layer transformations.

Data Cleaning Type Conversions Business Logic

12_window_transformations.py

Ranking functions, moving averages, lead/lag operations, and time-series analytics with window functions.

Window Functions Ranking Time-Series
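
A taste of the window-function patterns covered here; the table and columns are hypothetical:

```python
from pyspark.sql import functions as F
from pyspark.sql.window import Window

by_store = Window.partitionBy("store_id").orderBy("sale_date")

daily_sales = (spark.table("gold.daily_sales")
    # rank days within each store by revenue
    .withColumn("rank", F.rank().over(
        Window.partitionBy("store_id").orderBy(F.desc("amount"))))
    # lead/lag for day-over-day deltas
    .withColumn("prev_day", F.lag("amount", 1).over(by_store))
    # 7-row moving average (current row plus the 6 preceding)
    .withColumn("ma_7d", F.avg("amount").over(by_store.rowsBetween(-6, 0))))
```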

13_aggregations.py

Complex grouping operations, CUBE/ROLLUP, statistical functions, and Silver to Gold layer transformations.

Aggregations CUBE/ROLLUP Statistics
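
A minimal ROLLUP example (CUBE works the same way but emits every grouping combination); the table and columns are hypothetical:

```python
from pyspark.sql import functions as F

sales = spark.table("silver.sales")

# One row per (region, category), plus per-region subtotals and a grand total
rollup_totals = (sales
    .rollup("region", "category")
    .agg(F.sum("amount").alias("total"),
         F.stddev("amount").alias("stddev_amount"))
    .orderBy("region", "category"))

# sales.cube("region", "category") would additionally emit per-category totals
```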

14_transformation_concepts_explained.py

Lazy evaluation, narrow vs. wide transformations, partitioning and shuffling, the Catalyst optimizer, and caching and persistence strategies.

Transformations Partitioning/Shuffling Caching
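
Lazy evaluation in a nutshell, as this notebook explains: transformations only build a plan, actions execute it. The table name is hypothetical:

```python
from pyspark.sql import functions as F

df = spark.table("silver.events").filter(F.col("amount") > 0)  # nothing runs yet

df.explain()     # inspect the Catalyst-optimized physical plan
df.cache()       # mark for caching; materialized on the first action
n = df.count()   # action: triggers execution and populates the cache
df.unpersist()   # release cached blocks when done
```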

Week 4: End-to-End Workflows

3 notebooks | ~4-6 hours

Build complete production pipelines from data ingestion through transformations to final insights.

15_file_to_aggregation.py

Complete ETL pipeline from file ingestion through all medallion layers (Bronze → Silver → Gold) with monitoring.

Full Pipeline ETL Monitoring

16_api_to_aggregation.py

Real-time data processing pipeline from API ingestion to final insights with error handling and recovery.

Real-Time API Pipeline Error Handling

17_pipeline_patterns_explained.py

Medallion architecture, idempotency and exactly-once processing, data quality and validation patterns, monitoring and observability, and checkpointing and recovery.

Medallion Architecture Data Quality Monitoring Recovery Patterns
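
A hedged Structured Streaming sketch of the checkpointing pattern: with a checkpoint location and a Delta sink, a restarted stream resumes where it left off. Paths and table names are hypothetical:

```python
stream = (spark.readStream.table("bronze.events")
    .writeStream.format("delta")
    .option("checkpointLocation", "/checkpoints/silver_events")  # progress + state for recovery
    .outputMode("append")
    .toTable("silver.events"))
```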

Week 5: Production Deployment

4 notebooks | ~8-10 hours

Professional Python packaging with Poetry, job orchestration, and production deployment with real stock market data.

18_job_orchestration_concepts_explained.py

Comprehensive guide to DAG concepts, retry logic, scheduling, and job orchestration fundamentals with both UI and SDK approaches.

Job Orchestration DAGs Scheduling UI & SDK
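
A hedged sketch of the SDK approach using the `databricks-sdk` package; the job name, notebook paths, task keys, and cluster id are hypothetical:

```python
from databricks.sdk import WorkspaceClient
from databricks.sdk.service import jobs

w = WorkspaceClient()  # reads credentials from the environment / config profile

job = w.jobs.create(
    name="daily-ingestion",
    tasks=[
        jobs.Task(task_key="ingest",
                  notebook_task=jobs.NotebookTask(notebook_path="/Repos/etl/06_file_ingestion"),
                  existing_cluster_id="0101-123456-abcdef"),
        jobs.Task(task_key="transform",
                  depends_on=[jobs.TaskDependency(task_key="ingest")],  # DAG edge
                  notebook_task=jobs.NotebookTask(notebook_path="/Repos/etl/11_simple_transformations"),
                  existing_cluster_id="0101-123456-abcdef"),
    ],
)
```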

19_create_multi_task_ingestion_job.py

Multi-task job orchestration with parallel execution, dependency management, and real-time monitoring.

Multi-Task Jobs Parallel Execution Monitoring

20_wheel_creation_with_poetry.py

Professional Python packaging with Poetry, building reusable modules, testing, and deploying wheels to Databricks.

Poetry Wheel Packages Testing

21_stock_market_wheel_deployment.py

Production capstone project with real stock market data (AAPL, GOOGL, MSFT, AMZN, NVDA) using Yahoo Finance. Complete medallion architecture pipeline with financial calculations and deployment automation.

Real Market Data Yahoo Finance Production Pipeline Financial Analytics
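
For context, a hedged sketch of pulling the capstone's tickers with the `yfinance` library; the date range is illustrative:

```python
import yfinance as yf

tickers = ["AAPL", "GOOGL", "MSFT", "AMZN", "NVDA"]
prices = yf.download(tickers, start="2024-01-01", end="2024-06-30")["Close"]

# Daily returns as a simple example of a financial calculation
daily_returns = prices.pct_change().dropna()
```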

Advanced: Databricks Apps

2 notebooks | ~4-6 hours

Build interactive data applications with Streamlit, including a production stock market analyzer querying gold layer tables.

01_databricks_apps_guide.py

Complete guide to building data applications on Databricks: architecture, Streamlit fundamentals, Unity Catalog integration, and deployment strategies.

Databricks Apps Streamlit App Architecture Deployment
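
A minimal Streamlit sketch in the spirit of this module, querying a hypothetical gold-layer table through the `databricks-sql-connector`; connection details are assumed to come from Streamlit secrets:

```python
import streamlit as st
from databricks import sql  # databricks-sql-connector

st.title("Stock Market Overview")

with sql.connect(server_hostname=st.secrets["host"],
                 http_path=st.secrets["http_path"],
                 access_token=st.secrets["token"]) as conn:
    with conn.cursor() as cur:
        cur.execute("SELECT ticker, trade_date, close_price FROM gold.stock_prices")
        df = cur.fetchall_arrow().to_pandas()

ticker = st.selectbox("Ticker", sorted(df["ticker"].unique()))
st.line_chart(df[df["ticker"] == ticker].set_index("trade_date")["close_price"])
```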

02_stock_market_analyzer_app.py

Production Streamlit application with interactive stock market analysis: market overview, risk-return analysis, detailed stock analysis, and portfolio simulator using gold layer data.

Interactive Dashboard Financial Analysis Portfolio Sim Production App

🎓 What You'll Learn

Databricks Platform
Master the workspace, Unity Catalog, and cluster management

📊 Data Engineering
Build production pipelines with medallion architecture

🔄 ETL/ELT Patterns
Implement ingestion, transformation, and loading workflows

🚀 Production Deployment
Package, orchestrate, and deploy professional solutions

📦 Python Packaging
Create reusable wheel packages with Poetry

📱 Data Applications
Build interactive dashboards with Streamlit

Ready to Start?

Choose your path and begin your Databricks journey

Get Started as Data Engineer →
Deploy Infrastructure →