
Lead II - Data Engineering - Python - Databricks, PySpark, Python
at a global digital transformation solutions provider.
Role Proficiency:
This role requires proficiency in developing data pipelines, including coding and testing for ingesting, wrangling, transforming, and joining data from various sources. The ideal candidate should be adept with ETL tools such as Informatica, Glue, Databricks, and Dataproc, with strong coding skills in Python, PySpark, and SQL. This position demands independence and proficiency across various data domains. Expertise in data warehousing solutions such as Snowflake, BigQuery, Lakehouse, and Delta Lake is essential, including the ability to calculate processing costs and address performance issues. A solid understanding of DevOps and infrastructure needs is also required.
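The pipeline stages named above (ingest, wrangle, transform, join) can be sketched in plain Python to show the shape of the work; in a real pipeline these would be PySpark DataFrame operations against cloud storage. All table, column, and customer names below are invented for illustration.

```python
import csv
import io

# Hypothetical raw extract; a real pipeline would ingest from S3/ADLS/GCS.
orders_csv = "order_id,customer_id,amount\n1,10,250.0\n2,11,\n3,10,75.5\n"
customers = {10: "Acme Corp", 11: "Globex"}  # illustrative dimension table

# Ingest: parse the raw CSV into records.
rows = list(csv.DictReader(io.StringIO(orders_csv)))

# Wrangle: drop records with missing amounts and cast types.
clean = [
    {"order_id": int(r["order_id"]),
     "customer_id": int(r["customer_id"]),
     "amount": float(r["amount"])}
    for r in rows if r["amount"]
]

# Join: enrich each order with the customer dimension.
enriched = [dict(r, customer=customers[r["customer_id"]]) for r in clean]

# Transform/aggregate: total order amount per customer.
totals = {}
for r in enriched:
    totals[r["customer"]] = totals.get(r["customer"], 0.0) + r["amount"]

print(totals)  # {'Acme Corp': 325.5}
```

In PySpark the same stages map onto `spark.read.csv`, `dropna`/`withColumn`, `join`, and `groupBy().sum()`.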
Skill Examples:
- Proficiency in SQL, Python, or other programming languages used for data manipulation.
- Experience with ETL tools such as Apache Airflow, Talend, Informatica, AWS Glue, Dataproc, and Azure ADF.
- Hands-on experience with cloud platforms like AWS, Azure, or Google Cloud, particularly their data-related services (e.g. AWS Glue, BigQuery).
- Conduct tests on data pipelines and evaluate results against data quality and performance specifications.
- Experience in performance tuning.
- Experience in data warehouse design and cost improvements.
- Apply and optimize data models for efficient storage retrieval and processing of large datasets.
- Communicate and explain design/development aspects to customers.
- Estimate time and resource requirements for developing/debugging features/components.
- Participate in RFP responses and solutioning.
- Mentor team members and guide them in relevant upskilling and certification.
Knowledge Examples:
- Knowledge of the ETL services offered by cloud providers, including Apache PySpark, AWS Glue, GCP Dataproc/Dataflow, and Azure ADF/ADLS.
- Proficient in SQL for analytics and windowing functions.
- Understanding of data schemas and models.
- Familiarity with domain-related data.
- Knowledge of data warehouse optimization techniques.
- Understanding of data security concepts.
- Awareness of design patterns, frameworks, and automation practices.
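The SQL analytics skills listed above (CTEs, window functions) can be demonstrated with an in-memory SQLite database from the Python standard library; the `sales` table and its columns are invented for the example, and the same query shape carries over to Snowflake, BigQuery, or Databricks SQL.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sales (region TEXT, month TEXT, revenue REAL);
INSERT INTO sales VALUES
  ('north', '2025-01', 100), ('north', '2025-02', 150),
  ('south', '2025-01', 80),  ('south', '2025-02', 60);
""")

# A CTE feeding a window function: running revenue per region by month.
query = """
WITH ordered AS (
  SELECT region, month, revenue FROM sales
)
SELECT region, month,
       SUM(revenue) OVER (
         PARTITION BY region ORDER BY month
       ) AS running_total
FROM ordered
ORDER BY region, month;
"""
rows = conn.execute(query).fetchall()
for r in rows:
    print(r)
```

`PARTITION BY` restarts the running total for each region, while `ORDER BY` inside the `OVER` clause defines the accumulation order.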
Additional Comments:
# of Resources: 22
Role(s): Technical Role
Location(s): India
Planned Start Date: 1/1/2026
Planned End Date: 6/30/2026
Project Overview:
Role Scope / Deliverables: We are seeking a highly skilled Data Engineer with strong experience in Databricks, PySpark, Python, SQL, and AWS to join our data engineering team on or before the first week of December 2025.
The candidate will be responsible for designing, developing, and optimizing large-scale data pipelines and analytics solutions that drive business insights and operational efficiency.
- Design, build, and maintain scalable data pipelines using Databricks and PySpark.
- Develop and optimize complex SQL queries for data extraction, transformation, and analysis.
- Implement data integration solutions across multiple AWS services (S3, Glue, Lambda, Redshift, EMR, etc.).
- Collaborate with analytics, data science, and business teams to deliver clean, reliable, and timely datasets.
- Ensure data quality, performance, and reliability across data workflows.
- Participate in code reviews, data architecture discussions, and performance optimization initiatives.
- Support migration and modernization efforts for legacy data systems to modern cloud-based solutions.
Key Skills:
- Hands-on experience with Databricks, PySpark & Python for building ETL/ELT pipelines.
- Proficiency in SQL (performance tuning, complex joins, CTEs, window functions).
- Strong understanding of AWS services (S3, Glue, Lambda, Redshift, CloudWatch, etc.).
- Experience with data modeling, schema design, and performance optimization.
- Familiarity with CI/CD pipelines, version control (Git), and workflow orchestration (Airflow preferred).
- Excellent problem-solving, communication, and collaboration skills.
Skills: Databricks, PySpark & Python, SQL, AWS services
Must-Haves
Python/PySpark (5+ years), SQL (5+ years), Databricks (3+ years), AWS Services (3+ years), ETL tools (Informatica, Glue, DataProc) (3+ years)
- Hands-on experience with Databricks, PySpark & Python for ETL/ELT pipelines.
- Proficiency in SQL (performance tuning, complex joins, CTEs, window functions).
- Strong understanding of AWS services (S3, Glue, Lambda, Redshift, CloudWatch, etc.).
- Experience with data modeling, schema design, and performance optimization.
- Familiarity with CI/CD pipelines, Git, and workflow orchestration (Airflow preferred).
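Workflow orchestration (Airflow is named as the preferred tool) reduces to running tasks in dependency order. The stdlib sketch below illustrates that idea with `graphlib` rather than the Airflow API itself; the task names and dependency graph are invented for the example.

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline tasks mapped to their upstream dependencies,
# analogous to the task dependencies declared in an Airflow DAG.
dag = {
    "extract": set(),
    "transform": {"extract"},
    "load": {"transform"},
    "data_quality_checks": {"load"},
}

# Resolve an execution order that respects every dependency edge.
order = list(TopologicalSorter(dag).static_order())
print(order)  # ['extract', 'transform', 'load', 'data_quality_checks']
```

In Airflow the equivalent structure is declared with operators and `>>` dependencies, and the scheduler handles ordering, retries, and backfills.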
Notice period - Immediate to 15 days
Location: Bangalore
