Venkatraman Cloud Resume
HELLO, SO GOOD TO SEE YOU!
VENKATRAMAN
CLOUD PRACTITIONER
I BUILD RELIABLE AND SCALABLE
CLOUD ENVIRONMENTS.
A bit about me!
Passionate and detail-oriented AWS Cloud Practitioner with direct, hands-on experience through the AWS re/Start Program. Expert in architecting scalable, secure cloud environments and deploying automation using EC2, S3, Lambda, IAM, and VPC. Adept at troubleshooting, cost optimization, and core DevOps practices, and at connecting technical insights with real business needs. Driven to deliver reliable, high-impact cloud solutions for modern teams and organizations.

Visitor Tracking

How it works (technical)
This site uses a simple, robust front-end + serverless backend pattern to count visitors. Below are the components, recommended data model, and example flow.
Frontend:
Stable visitor id - the front-end creates or reuses a stable visitor_id stored in localStorage (e.g. key venkat_visitor_id_v1). Use crypto.randomUUID() if available.

API calls - on page load the front-end optionally POSTs a small payload to /log (visitor_id, timestamp, userAgent) and then GETs /count to display the current number.
Example JS:
// Set up API URLs (replace with your API Gateway endpoints)
const POST_LOG_URL = 'https://YOUR_API_GATEWAY/log';
const GET_COUNT_URL = 'https://YOUR_API_GATEWAY/count';

// Create or reuse a stable visitor id
const id = localStorage.getItem('venkat_visitor_id_v1') || crypto.randomUUID();
localStorage.setItem('venkat_visitor_id_v1', id);

// POST the visit (best-effort)
fetch(POST_LOG_URL, {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ visitor_id: id, ts: new Date().toISOString(), userAgent: navigator.userAgent })
});

// GET the current count and display it
fetch(GET_COUNT_URL)
  .then(r => r.json())
  .then(data => { document.getElementById('visitorCount').textContent = data.count; });
Backend:
  • API Gateway: Two endpoints — POST /log and GET /count. Enable CORS for your static site origin.
  • AWS Lambda: small Lambda function (Python or Node) that handles both endpoints.
  • Data model (DynamoDB):
    • visitors table with visitor_id as PK and last_seen timestamp (optional TTL).
    • counters table or a single item to keep an atomic counter (use DynamoDB UpdateItem with ADD to increment safely).
  • Counting logic: either track unique visitors (insert the visitor_id if it is missing and increment the counter once), or increment on every page view, depending on the metric required.
Example (Lambda pseudocode):
# POST /log
body = parse(event.body)
if not visitors_table.contains(body.visitor_id):
    visitors_table.put({visitor_id: body.visitor_id, last_seen: now})
    counters_table.update({key: 'page_visitors'}, ADD: {count: 1})   # atomic increment, once per unique visitor
else:
    visitors_table.update(body.visitor_id, {last_seen: now})

# GET /count
count = counters_table.get('page_visitors').count
return {count: count}
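For reference, here is a minimal runnable version of the handler above in Python with boto3. It assumes an API Gateway proxy integration, a visitors table keyed by visitor_id, and a counters table keyed by counter_name; the table names, environment variables, and _response helper are illustrative assumptions rather than the deployed stack.

import json
import os
from datetime import datetime, timezone

import boto3
from botocore.exceptions import ClientError

dynamodb = boto3.resource("dynamodb")
visitors = dynamodb.Table(os.environ.get("VISITORS_TABLE", "visitors"))   # assumed table names
counters = dynamodb.Table(os.environ.get("COUNTERS_TABLE", "counters"))

def lambda_handler(event, context):
    method = event.get("httpMethod", "")
    path = event.get("path", "")

    if method == "POST" and path.endswith("/log"):
        body = json.loads(event.get("body") or "{}")
        now = datetime.now(timezone.utc).isoformat()
        try:
            # Insert the visitor only if unseen, so each visitor is counted once.
            visitors.put_item(
                Item={"visitor_id": body["visitor_id"], "last_seen": now},
                ConditionExpression="attribute_not_exists(visitor_id)",
            )
            # New visitor: atomic increment ("count" is a reserved word, hence the alias).
            counters.update_item(
                Key={"counter_name": "page_visitors"},
                UpdateExpression="ADD #c :one",
                ExpressionAttributeNames={"#c": "count"},
                ExpressionAttributeValues={":one": 1},
            )
        except ClientError as err:
            if err.response["Error"]["Code"] != "ConditionalCheckFailedException":
                raise
            # Returning visitor: just refresh last_seen.
            visitors.update_item(
                Key={"visitor_id": body["visitor_id"]},
                UpdateExpression="SET last_seen = :now",
                ExpressionAttributeValues={":now": now},
            )
        return _response(200, {"ok": True})

    if method == "GET" and path.endswith("/count"):
        item = counters.get_item(Key={"counter_name": "page_visitors"}).get("Item", {})
        return _response(200, {"count": int(item.get("count", 0))})

    return _response(404, {"error": "not found"})

def _response(status, payload):
    return {
        "statusCode": status,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(payload),
    }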

Deployment Notes:

  • Hosting - The static site is hosted on S3 (or any static host) and served through CloudFront for global performance and HTTPS.
  • Caching - Use a short Cache-Control max-age (or bypass caching) on the count endpoint so visitors see near-real-time updates.
  • Security - Enable CORS only for your domain, validate inputs in Lambda, and consider rate limiting or bot filtering if the count should reflect genuine human traffic.
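As one concrete way to apply the caching and CORS notes, the _response helper sketched earlier could return explicit headers; the origin below is a placeholder for the real site domain, not the actual configuration.

RESPONSE_HEADERS = {
    "Content-Type": "application/json",
    "Access-Control-Allow-Origin": "https://resume.example.com",  # placeholder: restrict CORS to your domain, not "*"
    "Cache-Control": "no-cache, max-age=0",                       # keep the /count value near real-time
}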
PROJECTS
File Handler CLI App
Developed a command line interface tool for efficient file management and automation.
Credit Risk Analyzer
Built a predictive model to assess credit risk using logistic regression and decision trees.
RSVP Movies SQL Data Analytics
Analyzed movie rental data using complex SQL queries to identify customer trends.
HR Analytics Mentorship Project
Mentored in HR data analytics to improve talent retention and productivity strategies.
Telecom Churn Prediction
Developed a machine learning model to predict customer churn in the telecom industry.
Airbnb NYC Post-COVID Strategy
Designed strategic recommendations for Airbnb to boost bookings post-pandemic.

File Handler CLI App

A command-line interface (CLI) application designed to manage product-related data using CRUD operations. Built entirely in Python, this app allows persistent management of product files organized in a structured directory format. Ideal for offline inventory or catalog systems.

Overview
  • Goal: Create a simple yet robust CLI tool to manage product details and sales records.
  • Handles Create, Read, Update, and Delete operations via menu-based navigation.
  • Stores all data persistently in structured CSV and text files.
Features
  • Modular Design — Functions like create(), update(), delete() encapsulate logic for clarity (see the sketch after this list).
  • Data Persistence — Files auto-save before exit to ensure no data loss.
  • Error Handling — Friendly messages for file issues and invalid inputs.
  • Input Validation — Prevents invalid choices or missing fields.
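A minimal sketch of the menu-driven CRUD loop described above. The function names, CSV columns, and products.csv file are illustrative assumptions; the repository's actual structure may differ.

import csv
import sys
from pathlib import Path

def load(products_file: Path) -> list[dict]:
    # Read existing products, or start empty if the file does not exist yet.
    if not products_file.exists():
        return []
    with products_file.open(newline="") as f:
        return list(csv.DictReader(f))

def save(products_file: Path, rows: list[dict]) -> None:
    # Persist all rows back to disk (auto-save before exit).
    with products_file.open("w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["id", "name", "price"])
        writer.writeheader()
        writer.writerows(rows)

def main(folder: str) -> None:
    products_file = Path(folder) / "products.csv"
    rows = load(products_file)
    while True:
        choice = input("1) Create 2) Read 3) Update 4) Delete 5) Exit: ").strip()
        if choice == "1":
            rows.append({"id": input("id: "), "name": input("name: "), "price": input("price: ")})
        elif choice == "2":
            for row in rows:
                print(row)
        elif choice == "3":
            pid = input("id to update: ")
            for row in rows:
                if row["id"] == pid:
                    row["name"] = input("new name: ")
        elif choice == "4":
            pid = input("id to delete: ")
            rows = [row for row in rows if row["id"] != pid]
        elif choice == "5":
            save(products_file, rows)
            break
        else:
            print("Invalid choice, try again.")  # basic input validation

if __name__ == "__main__":
    main(sys.argv[1] if len(sys.argv) > 1 else ".")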
How to Run
git clone https://github.com/Venkat8062/file-handler-cli-app.git
cd file-handler-cli-app
python main.py /path/to/main_folder
        
View on GitHub
Future Enhancements
  • Add JSON export/import options.
  • Integrate SQLite for hybrid local storage.
  • Include automatic backup and versioning.

Credit Risk Analyzer 📊

A comprehensive data exploration and analysis project focused on identifying the factors influencing loan default risk using real-world financial data. Includes cleaning, feature engineering, EDA, correlation analysis, and Power BI readiness for dashboarding.

Project Overview
  • Goal: Predict and explain loan default patterns.
  • Dataset: 300K+ loan applications and historical records.
  • Target variable: TARGET (1 = default, 0 = non-default).
Key Techniques Used
  • Data Cleaning: Dropped columns with >45% missing values, imputed median/mode values.
  • Feature Engineering: Converted days to years, created annuity bands, category encoding (see the sketch after this list).
  • EDA: Countplots, histograms, boxplots comparing defaulters vs non-defaulters.
  • Statistical Testing: KS Test found 38+ features with distribution shifts.
  • Correlation Analysis: Detected 28 pairs with correlation coefficient > 0.7.
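A minimal sketch of the cleaning and feature-engineering steps above in pandas. The file name and columns such as DAYS_BIRTH and AMT_ANNUITY are assumptions for illustration; only the TARGET column is named in this write-up.

import pandas as pd

df = pd.read_csv("application_data.csv")  # hypothetical file name

# Drop columns with more than 45% missing values.
missing_share = df.isna().mean()
df = df.drop(columns=missing_share[missing_share > 0.45].index)

# Impute remaining gaps: median for numeric columns, mode for categorical ones.
for col in df.columns:
    if df[col].dtype.kind in "if":
        df[col] = df[col].fillna(df[col].median())
    else:
        df[col] = df[col].fillna(df[col].mode().iloc[0])

# Convert day counts to years and band the annuity for easier comparison.
df["AGE_YEARS"] = (-df["DAYS_BIRTH"] / 365).round(1)
df["ANNUITY_BAND"] = pd.qcut(df["AMT_ANNUITY"], q=4, labels=["low", "mid", "high", "very_high"])

# Compare defaulters (TARGET = 1) vs non-defaulters on a numeric feature.
print(df.groupby("TARGET")["AMT_ANNUITY"].describe())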
Insights
  • Males have higher default rates than females.
  • Pensioners and highly educated borrowers show better repayment patterns.
  • High annuity and multiple loans strongly indicate defaults.
  • Loan amount strongly correlates with goods price.
Tools & Libraries
  • Python (pandas, numpy, seaborn, matplotlib, scipy)
  • Power BI
  • SQL
Future Enhancements
  • Build predictive machine learning models (Logistic Regression, Random Forest).
  • Apply feature interpretability approaches (SHAP, RFE).
  • Integrate cloud-based live dashboards.

RSVP Movies SQL Data Analytics 📊

End-to-end SQL-based analytics project for RSVP Movies to prepare their first global film release in 2023. Explored three years of movie data to generate actionable insights and recommendations.

Project Objectives
  • Analyze historical movie data to uncover key trends and patterns.
  • Support business decisions for launching a globally targeted film in 2023.
  • Deliver actionable recommendations based on data-driven insights.
Dataset & Structure
  • Multiple tables capturing releases, genres, box office performance, audience demographics (3 years).
  • Main files:
    • RSVP_Movies_Analysis.sql – SQL analysis scripts
    • Executive_Summary.pdf – Key insights & recommendations
Techniques Used
  • SQL joins, aggregations, filtering, subqueries, and window functions.
  • Data validation and quality checks at each stage.
  • Insights visualized in Power BI dashboards.
Key Insights
  • Top-performing genres and audience segments identified for global appeal.
  • Box office trends and release strategies analyzed.
  • Successful marketing and distribution channels highlighted.

HR Analytics Mentorship Project

A data-driven mentorship project exploring how HR analytics can improve employee engagement, retention, and strategic decisions—delivered with storytelling and actionable insights.
Project Overview
  • Goal: Demystify HR Analytics and show its impact on workforce strategy.
  • Real Impact: Solved HR challenges like attrition and employee engagement.
  • Mentorship Lessons: Practical guidance gained through hands-on analytics.
Key Techniques & Applications
  • Data Analysis: Recruitment optimization, predictive modeling, KPI tracking.
  • Addressed barriers: Improved data quality, upskilled teams, overcame resistance.
  • Recommendations: Built KPIs and analytics-driven solutions for HR teams.
Tools & Skills
  • Python (pandas, numpy, seaborn, matplotlib)
  • SQL
  • Power BI / Tableau, Excel, Canva
Python Functions Used
  • pd.read_csv(), df.head(), df.shape
  • df.groupby().agg(), value_counts(), mean()
  • train_test_split(), LogisticRegression(), classification_report(), confusion_matrix()
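A minimal sketch that combines the functions above into a simple attrition model; the file name and the Attrition/Department columns are assumptions for illustration.

import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report, confusion_matrix
from sklearn.model_selection import train_test_split

df = pd.read_csv("hr_data.csv")           # hypothetical dataset
print(df.head(), df.shape)
print(df["Attrition"].value_counts())     # "Attrition" and "Department" are assumed column names

# Numeric attrition flag and a quick attrition rate per department.
df["attrition_flag"] = (df["Attrition"] == "Yes").astype(int)
print(df.groupby("Department").agg(attrition_rate=("attrition_flag", "mean")))

# One-hot encode features and fit a baseline logistic regression.
X = pd.get_dummies(df.drop(columns=["Attrition", "attrition_flag"]), drop_first=True)
y = df["attrition_flag"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42, stratify=y)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)
pred = model.predict(X_test)

print(confusion_matrix(y_test, pred))
print(classification_report(y_test, pred))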
Key Insights & Recommendations
  • Predictive analytics reduces turnover by identifying at-risk employees early.
  • Engagement strategies measurably improve retention and satisfaction.
  • Invest in upskilling HR teams and pilot analytics projects.
  • Promote curiosity: explore data, ask questions, learn continuously.

Telecom Churn Prediction Project

End-to-end churn analytics and prediction project for telecom customer retention. Explored, engineered, and modeled churn using over 100 telecom variables and identified key predictors.
Key Highlights
  • Identified high-value customers using average recharge amount.
  • Cleaned and preprocessed 100+ features from telecom data.
  • Engineered churn label based on inactivity in month 9.
  • Handled multicollinearity using VIF and correlation analysis.
  • Feature selection with Random Forest and RFE.
  • Trained multiple models – Logistic Regression, Random Forest (with SMOTE).
  • Achieved ~60% accuracy after class imbalance handling.
  • Top 10 churn predictors identified; roam_ic_mou_8 most influential.
Project Structure
  • telecom_churn_data.csv  — Input dataset
  • telecom_churn_prediction.ipynb — Full analysis & modeling
  • README.md — Project documentation
Data Preparation & Feature Engineering
  • Removed constant & irrelevant features, parsed dates, derived features like days_since_last_rech.
  • Filtered the top 30% high-value customers and created a binary churn label (sketched after this list).
  • Handled missing values, scaled features, removed low-variance & highly correlated columns.
  • Applied RFE with Random Forest for feature selection.
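A minimal sketch of the high-value filtering and churn-label step described above; the recharge and month-9 usage column names follow a typical telecom churn export and are assumptions here, not confirmed from the notebook.

import pandas as pd

df = pd.read_csv("telecom_churn_data.csv")  # input dataset named in the project structure

# High-value customers: average recharge in the early months at or above the 70th percentile (top ~30%).
avg_rech = (df["total_rech_amt_6"] + df["total_rech_amt_7"]) / 2
df = df[avg_rech >= avg_rech.quantile(0.70)].copy()

# Churn label: no incoming/outgoing calls and no data usage in month 9.
month9_usage = df["total_ic_mou_9"] + df["total_og_mou_9"] + df["vol_2g_mb_9"] + df["vol_3g_mb_9"]
df["churn"] = (month9_usage == 0).astype(int)

# Drop month-9 columns so the model only learns from earlier months.
df = df.drop(columns=[c for c in df.columns if c.endswith("_9")])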
Model Building
  • Random Forest (Baseline): ~94% accuracy but overfit due to class imbalance.
  • Logistic Regression (Balanced): ~82% accuracy; class imbalance handled.
  • Random Forest + SMOTE: ~60% accuracy; best for churn detection and feature importance analysis.
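A minimal sketch of the selection-and-modeling flow above: RFE with a Random Forest, then SMOTE before the final fit. It assumes the churn label engineered in the previous sketch; file names and parameters are illustrative, not the notebook's exact settings.

import pandas as pd
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

df = pd.read_csv("telecom_churn_prepared.csv")  # assumed output of the preparation step
X = df.drop(columns=["churn"]).select_dtypes("number").fillna(0)
y = df["churn"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42, stratify=y)

# Keep the 10 strongest predictors according to a Random Forest.
selector = RFE(RandomForestClassifier(n_estimators=100, random_state=42), n_features_to_select=10)
selector.fit(X_train, y_train)
top_features = X_train.columns[selector.support_]

# Oversample the minority (churn) class, then fit the final model.
X_res, y_res = SMOTE(random_state=42).fit_resample(X_train[top_features], y_train)
model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_res, y_res)

print(classification_report(y_test, model.predict(X_test[top_features])))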

Airbnb NYC Post-COVID Strategy

Supported Airbnb’s executive team in developing a post-lockdown revival strategy using 2019 NYC listing data and Tableau dashboards. Combined data analysis, visualization, and storytelling to guide executive decisions.
Objective
  • Analyze listing data across neighborhoods, room types, pricing, and availability to uncover occupancy and revenue patterns.
  • Identify high-density areas (Manhattan, Brooklyn) with premium returns and potential oversupply risks.
  • Provide actionable insights for post-COVID marketing, pricing, and operational strategy.
Key Analysis & Insights
  • Correlated room type preferences with neighborhood popularity to optimize pricing strategies by borough.
  • Segmented listings based on availability and flagged underutilized properties for dynamic pricing.
  • Created Tableau dashboards showing geo-distribution, host patterns, optimal zones for reopening marketing campaigns.
  • Delivered recommendations to guide Airbnb’s revenue strategy in a recovering travel market.
Tools & Skills
  • Tableau – Interactive dashboards and data visualization.
  • Python (pandas, matplotlib) – Data cleaning, analysis, and visualization.
  • Data Cleaning – Handling missing values, outlier detection, and preprocessing (see the sketch after this list).
  • Visual Storytelling – Clear presentation of insights for decision-making.
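A minimal sketch of the pandas cleaning and segmentation described above; the file and column names follow the common 2019 NYC listings export and are assumptions here.

import pandas as pd

df = pd.read_csv("AB_NYC_2019.csv")  # assumed export of the 2019 NYC listings

# Handle missing values and trim extreme price outliers before visualizing.
df["reviews_per_month"] = df["reviews_per_month"].fillna(0)
df = df[(df["price"] > 0) & (df["price"] <= df["price"].quantile(0.99))]

# Segment listings by availability to flag underutilized properties.
df["availability_band"] = pd.cut(df["availability_365"], bins=[-1, 90, 270, 365], labels=["low", "medium", "high"])

# Median price by borough and room type, to inform pricing strategy per area.
print(df.groupby(["neighbourhood_group", "room_type"])["price"].median().unstack())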
Deliverables
  • Tableau storytelling dashboard
  • Executive insights report
EDUCATION
Graduate AWS re/Start
CULTUS EDUCATION
Jun 2025 – Oct 2025
GPA – 100%
Graduate in Data Science
UPGRAD
Jun 2024 – Feb 2025
GPA – 90%
MBA
in Entrepreneurship
TIPS Global Institute
Aug 2016 – Aug 2021
GPA – 80%
M.Sc.
in Building & Interior Design
TIPS Global Institute
Aug 2016 – Aug 2021
GPA – 80%
CERTIFICATIONS
Six Sigma Yellow Belt (6sigmastudy)
Learned fundamental Six Sigma concepts and process improvement methodologies to enhance business operations.
Lean Six Sigma White Belt
Introduced to Lean principles and basic problem-solving techniques for operational efficiency.
SQL for Data Science
Gained proficiency in querying, manipulating, and analyzing structured data using SQL.
Power BI Visualization
Developed skills in designing interactive dashboards and visual reports to derive business insights.
Python for Data Analysis
Applied Python libraries like Pandas and NumPy to clean, manipulate, and analyze datasets efficiently.
Power BI & Dashboarding
Created end-to-end dashboards to track KPIs and provide actionable insights for decision-making.

WORK EXPERIENCE


HR ANALYST

NSP KNITTING MILLS LIMITED

MAR 2024 – OCT 2024

TIRUPPUR, INDIA

  • Led data-driven management of employee attendance, payroll, and retention analytics for a workforce of 50.
  • Streamlined attendance tracking and payroll processes using automated Excel workflows (pivot tables, VLOOKUP, macros), boosting accuracy and efficiency.
  • Analyzed machine productivity and operations metrics, reducing downtime and increasing manufacturing productivity by 8%.
  • Visualized employee retention patterns, collaborating with leadership to improve engagement strategies and achieve 80% retention improvement.
  • Designed compensation models optimizing salary structures, resulting in a 10% increase in basic earnings while aligning with business objectives.
  • Created dashboards and reports using Power BI and Excel to communicate workforce and operational KPIs for data-driven decisions.
  • Translated complex HR and operations data into actionable recommendations for cross-functional teams.

JR ANALYST

NSP KNITTING MILLS LIMITED

FEB 2022 – FEB 2024

TIRUPPUR, INDIA

  • Supported and independently handled HR functions including payroll, attendance, and retention analytics during supervisor absences and night shifts.
  • Ensured effective employee communication, resolving issues quickly to maintain operational continuity and compliance.
  • Maintained 99% attendance accuracy and reduced payroll errors by 15% through Excel formulas and data validation.
  • Contributed to a 40% increase in employee retention by implementing analytics-backed engagement approaches.
  • Provided seamless HR operations continuity during peak periods, demonstrating adaptability and problem-solving skills.
Contact & Feedback
Let's connect — Your feedback and collaboration are welcome.