
Lead II - Data Engineering - Databricks, PySpark, Python
at Global digital transformation solutions provider.
Role Proficiency:
This role requires proficiency in developing data pipelines, including coding and testing for ingesting, wrangling, transforming, and joining data from various sources. The ideal candidate should be adept with ETL tools such as Informatica, Glue, Databricks, and DataProc, with strong coding skills in Python, PySpark, and SQL. This position demands independence and proficiency across various data domains. Expertise in data warehousing solutions such as Snowflake, BigQuery, Lakehouse, and Delta Lake is essential, including the ability to calculate processing costs and address performance issues. A solid understanding of DevOps and infrastructure needs is also required.
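As a hedged illustration of the kind of ingest-wrangle-transform-join pipeline described above, here is a minimal PySpark sketch; the S3 paths, column names, and table layout are hypothetical, and a Databricks/Delta runtime is assumed:

```python
# Hypothetical sketch: ingest, wrangle, transform, and join two sources,
# then write the result as a Delta table. Paths and columns are assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_pipeline").getOrCreate()

# Ingest: raw orders and a customer dimension (hypothetical locations)
orders = spark.read.json("s3://example-raw/orders/")
customers = spark.read.parquet("s3://example-curated/customers/")

# Wrangle: drop malformed rows and normalize types
orders_clean = (
    orders
    .dropna(subset=["order_id", "customer_id"])
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("amount", F.col("amount").cast("decimal(12,2)"))
)

# Join and transform: enrich orders with customer attributes
enriched = (
    orders_clean
    .join(customers, "customer_id", "left")
    .withColumn("order_date", F.to_date("order_ts"))
)

# Load: write as a partitioned Delta table (assumes a Delta-enabled runtime)
(enriched.write
    .format("delta")
    .mode("overwrite")
    .partitionBy("order_date")
    .save("s3://example-curated/orders_enriched/"))
```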
Skill Examples:
- Proficiency in SQL, Python, or other programming languages used for data manipulation.
- Experience with ETL tools such as Apache Airflow, Talend, Informatica, AWS Glue, Dataproc, and Azure ADF.
- Hands-on experience with cloud platforms like AWS, Azure, or Google Cloud, particularly with data-related services (e.g., AWS Glue, BigQuery).
- Conduct tests on data pipelines and evaluate results against data quality and performance specifications.
- Experience in performance tuning.
- Experience in data warehouse design and cost improvements.
- Apply and optimize data models for efficient storage retrieval and processing of large datasets.
- Communicate and explain design/development aspects to customers.
- Estimate time and resource requirements for developing/debugging features/components.
- Participate in RFP responses and solutioning.
- Mentor team members and guide them in relevant upskilling and certification.
Knowledge Examples:
- Knowledge of various ETL services used by cloud providers, including Apache PySpark, AWS Glue, GCP DataProc/Dataflow, Azure ADF, and ADLF.
- Proficient in SQL for analytics and windowing functions (see the sketch after this list).
- Understanding of data schemas and models.
- Familiarity with domain-related data.
- Knowledge of data warehouse optimization techniques.
- Understanding of data security concepts.
- Awareness of patterns, frameworks, and automation practices.
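As a hedged illustration of the SQL analytics and windowing skills listed above, a minimal example expressed through PySpark's SQL interface; the table and column names are hypothetical, and the query assumes an `orders_enriched` table is already registered:

```python
# Hypothetical sketch: a CTE plus a window function to rank each customer's
# orders by recency. Table and column names are assumptions for illustration.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sql_window_demo").getOrCreate()

ranked = spark.sql("""
    WITH recent_orders AS (
        SELECT customer_id, order_id, amount, order_ts
        FROM orders_enriched
        WHERE order_ts >= date_sub(current_date(), 90)
    )
    SELECT customer_id,
           order_id,
           amount,
           ROW_NUMBER() OVER (
               PARTITION BY customer_id
               ORDER BY order_ts DESC
           ) AS recency_rank
    FROM recent_orders
""")
ranked.show()
```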
Additional Comments:
# of Resources: 22
Role(s): Technical
Location(s): India
Planned Start Date: 1/1/2026
Planned End Date: 6/30/2026
Project Overview:
Role Scope / Deliverables:
We are seeking a highly skilled Data Engineer with strong experience in Databricks, PySpark, Python, SQL, and AWS to join our data engineering team on or before the first week of December 2025.
The candidate will be responsible for designing, developing, and optimizing large-scale data pipelines and analytics solutions that drive business insights and operational efficiency.
- Design, build, and maintain scalable data pipelines using Databricks and PySpark.
- Develop and optimize complex SQL queries for data extraction, transformation, and analysis.
- Implement data integration solutions across multiple AWS services (S3, Glue, Lambda, Redshift, EMR, etc.); a sketch follows this list.
- Collaborate with analytics, data science, and business teams to deliver clean, reliable, and timely datasets.
- Ensure data quality, performance, and reliability across data workflows.
- Participate in code reviews, data architecture discussions, and performance optimization initiatives.
- Support migration and modernization efforts from legacy data systems to modern cloud-based solutions.
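As a hedged sketch of the AWS-side integration work described above, the following uses standard boto3 calls to check for landed files in S3 and trigger a Glue job; the bucket, prefix, and job name are hypothetical:

```python
# Hypothetical sketch: verify raw files have landed in S3, then kick off
# a Glue ETL job. Bucket, prefix, and job name are assumptions.
import boto3

s3 = boto3.client("s3")
glue = boto3.client("glue")

# Check that today's raw files are available before starting the ETL job
resp = s3.list_objects_v2(Bucket="example-raw", Prefix="orders/2026-01-01/")
if resp.get("KeyCount", 0) > 0:
    run = glue.start_job_run(
        JobName="orders-enrichment",            # hypothetical Glue job
        Arguments={"--run_date": "2026-01-01"},
    )
    print("Started Glue run:", run["JobRunId"])
else:
    print("Raw files not yet available; skipping trigger.")
```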
Key Skills:
- Hands-on experience with Databricks, PySpark & Python for building ETL/ELT pipelines.
- Proficiency in SQL (performance tuning, complex joins, CTEs, window functions).
- Strong understanding of AWS services (S3, Glue, Lambda, Redshift, CloudWatch, etc.).
- Experience with data modeling, schema design, and performance optimization.
- Familiarity with CI/CD pipelines, version control (Git), and workflow orchestration (Airflow preferred; see the sketch after this list).
- Excellent problem-solving, communication, and collaboration skills.
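As a hedged sketch of Airflow orchestration for a Databricks job, assuming the apache-airflow-providers-databricks package is installed; the connection id, cluster spec, and notebook path are hypothetical:

```python
# Hypothetical sketch: a daily Airflow DAG that submits a Databricks run.
# Connection id, notebook path, and cluster spec are assumptions.
from datetime import datetime
from airflow import DAG
from airflow.providers.databricks.operators.databricks import (
    DatabricksSubmitRunOperator,
)

with DAG(
    dag_id="orders_enrichment_daily",
    start_date=datetime(2026, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    run_notebook = DatabricksSubmitRunOperator(
        task_id="run_orders_notebook",
        databricks_conn_id="databricks_default",  # hypothetical connection
        json={
            "run_name": "orders-enrichment",
            "new_cluster": {
                "spark_version": "13.3.x-scala2.12",
                "node_type_id": "i3.xlarge",
                "num_workers": 2,
            },
            "notebook_task": {
                "notebook_path": "/Repos/data-eng/orders_enrichment",
            },
        },
    )
```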
Skills: Databricks, PySpark & Python, SQL, AWS Services
Must-Haves:
Python/PySpark (5+ years), SQL (5+ years), Databricks (3+ years), AWS Services (3+ years), ETL tools (Informatica, Glue, DataProc) (3+ years)
Notice period: Immediate to 15 days
Location: Bangalore
