PySpark Jobs in Indore


Role descriptions / Expectations from the Role
· 6-7 years of IT development experience, with a minimum of 3 years of hands-on experience in Snowflake
· Strong experience designing and building data warehouses, data lakes, and data marts end to end, with a focus on large enterprise-scale Snowflake implementations on any of the hyperscalers
· Strong experience building production-grade data ingestion and data pipelines in Snowflake
· Good knowledge of Snowflake's architecture and features such as Zero-Copy Cloning, Time Travel, and performance tuning (see the sketch after this list)
· Good experience with Snowflake RBAC and data security
· Strong command of Snowflake features, including newly released capabilities
· Good experience with Python/PySpark
· Experience with AWS services (S3, Glue, Lambda, Secrets Manager, DMS) and some Azure services (Blob Storage, ADLS, ADF)
· Experience with orchestration and scheduling tools such as Airflow
· Good understanding of ETL/ELT processes and ETL tools
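
To make the Zero-Copy Cloning and Time Travel expectations above concrete, here is a minimal sketch using the snowflake-connector-python package. The connection parameters and table names (orders, orders_clone) are placeholder assumptions, not part of the role description.

```python
# Minimal sketch of Zero-Copy Cloning and Time Travel in Snowflake,
# run through the snowflake-connector-python package.
# All connection parameters and object names are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",   # placeholder account identifier
    user="my_user",
    password="my_password",
    warehouse="my_wh",
    database="my_db",
    schema="public",
)
cur = conn.cursor()
try:
    # Zero-Copy Cloning: creates a writable copy without duplicating storage.
    cur.execute("CREATE TABLE orders_clone CLONE orders")

    # Time Travel: query the table as it existed one hour ago.
    cur.execute("SELECT COUNT(*) FROM orders AT(OFFSET => -3600)")
    print(cur.fetchone())
finally:
    cur.close()
    conn.close()
```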

We are looking for skilled and passionate Data Engineers with a strong foundation in Python programming and hands-on experience with APIs, the AWS cloud, and modern development practices. The ideal candidate will have a keen interest in building scalable backend systems and in working with big data tools like PySpark.
Key Responsibilities:
- Write clean, scalable, and efficient Python code.
- Work with Python frameworks such as PySpark for data processing (a minimal sketch follows this list).
- Design, develop, update, and maintain APIs (RESTful).
- Deploy and manage code using GitHub CI/CD pipelines.
- Collaborate with cross-functional teams to define, design, and ship new features.
- Work on AWS cloud services for application deployment and infrastructure.
- Basic database design and interaction with MySQL or DynamoDB.
- Debugging and troubleshooting application issues and performance bottlenecks.
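As an illustration of the PySpark responsibility above, the following is a minimal sketch of a batch data-processing job. The S3 paths, column names, and aggregation logic are hypothetical assumptions, not part of the role description.

```python
# Minimal PySpark sketch: read raw CSV data, clean it, and aggregate.
# The input/output paths and column names are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-daily-summary").getOrCreate()

orders = (
    spark.read.option("header", True)
    .csv("s3://example-bucket/raw/orders/")  # hypothetical input location
)

daily_totals = (
    orders.withColumn("amount", F.col("amount").cast("double"))
    .filter(F.col("amount").isNotNull())
    .groupBy("order_date")
    .agg(F.sum("amount").alias("total_amount"), F.count("*").alias("order_count"))
)

daily_totals.write.mode("overwrite").parquet("s3://example-bucket/curated/orders_daily/")
spark.stop()
```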
Required Skills & Qualifications:
- 4+ years of hands-on experience with Python development.
- Proficient in Python basics with a strong problem-solving approach.
- Experience with AWS cloud services (EC2, Lambda, S3, etc.); see the boto3 sketch after this list.
- Good understanding of API development and integration.
- Knowledge of GitHub and CI/CD workflows.
- Experience working with PySpark or similar big data frameworks.
- Basic knowledge of MySQL or DynamoDB.
- Excellent communication skills and a team-oriented mindset.
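For the AWS skills listed above, a minimal boto3 sketch of an S3 upload followed by a Lambda invocation might look like the following; the bucket, key, and function names are placeholders.

```python
# Minimal boto3 sketch: upload a file to S3, then invoke a Lambda function.
# The bucket name, object key, and function name are placeholders.
import json
import boto3

s3 = boto3.client("s3")
s3.upload_file("report.csv", "example-bucket", "reports/report.csv")

lam = boto3.client("lambda")
response = lam.invoke(
    FunctionName="process-report",  # hypothetical function name
    Payload=json.dumps({"key": "reports/report.csv"}).encode("utf-8"),
)
print(response["StatusCode"], response["Payload"].read())
```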
Nice to Have:
- Experience in containerization (Docker/Kubernetes).
- Familiarity with Agile/Scrum methodologies.

- Performs analytics to extract insights from the organization's raw historical data (a Pandas/NumPy sketch follows this list).
- Generates usable training datasets for any/all MV projects with the help of annotators, if needed.
- Analyses user trends and identifies users' biggest bottlenecks in the Hammoq workflow.
- Tests the short- and long-term impact of productized MV models on those trends.
- Skills (mandatory): NumPy, Pandas, Apache Spark, PySpark, ETL.
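As a hedged illustration of the analytics and bottleneck-identification work described above, this short Pandas/NumPy sketch summarizes per-step durations from hypothetical workflow history data; the file name and column names are assumptions.

```python
# Minimal Pandas/NumPy sketch: extract simple insights from historical data.
# The file name and column names are illustrative assumptions.
import numpy as np
import pandas as pd

history = pd.read_csv("workflow_history.csv", parse_dates=["completed_at"])

# Time spent per workflow step, to surface the biggest bottlenecks.
step_times = (
    history.groupby("step_name")["duration_seconds"]
    .agg(["mean", "median", "count"])
    .sort_values("mean", ascending=False)
)
print(step_times.head())

# Week-over-week throughput trend as a quick health signal.
weekly = history.set_index("completed_at").resample("W")["duration_seconds"].count()
trend = np.polyfit(np.arange(len(weekly)), weekly.to_numpy(), deg=1)[0]
print(f"Throughput trend: {trend:+.1f} items/week")
```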