

Role Proficiency:
This role requires proficiency in developing data pipelines, including coding and testing for ingesting, wrangling, transforming, and joining data from various sources. The ideal candidate should be adept with ETL tools such as Informatica, Glue, Databricks, and DataProc, with strong coding skills in Python, PySpark, and SQL. This position demands independence and proficiency across various data domains. Expertise in data warehousing solutions such as Snowflake, BigQuery, Lakehouse, and Delta Lake is essential, including the ability to calculate processing costs and address performance issues. A solid understanding of DevOps and infrastructure needs is also required.
Skill Examples:
- Proficiency in SQL, Python, or other programming languages used for data manipulation.
- Experience with ETL tools such as Apache Airflow, Talend, Informatica, AWS Glue, Dataproc, and Azure ADF.
- Hands-on experience with cloud platforms like AWS, Azure, or Google Cloud, particularly with data-related services (e.g., AWS Glue, BigQuery).
- Conduct tests on data pipelines and evaluate results against data quality and performance specifications.
- Experience in performance tuning.
- Experience in data warehouse design and cost improvements.
- Apply and optimize data models for efficient storage, retrieval, and processing of large datasets.
- Communicate and explain design/development aspects to customers.
- Estimate time and resource requirements for developing/debugging features/components.
- Participate in RFP responses and solutioning.
- Mentor team members and guide them in relevant upskilling and certification.
Knowledge Examples:
- Knowledge of various ETL services used by cloud providers, including Apache PySpark, AWS Glue, GCP DataProc/Dataflow, Azure ADF, and ADLF.
- Proficient in SQL for analytics and windowing functions.
- Understanding of data schemas and models.
- Familiarity with domain-related data.
- Knowledge of data warehouse optimization techniques.
- Understanding of data security concepts.
- Awareness of patterns, frameworks, and automation practices.
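The "SQL for analytics and windowing functions" item above can be made concrete with a minimal sketch. This uses Python's built-in sqlite3 module (SQLite has supported window functions since 3.25); the table and column names are hypothetical, chosen only for illustration:

```python
import sqlite3

# In-memory SQLite database; the sales table and its columns are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, month TEXT, revenue REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?, ?)",
    [("north", "2024-01", 100.0), ("north", "2024-02", 150.0),
     ("south", "2024-01", 80.0), ("south", "2024-02", 120.0)],
)

# Running total per region -- a typical analytics windowing query:
# PARTITION BY splits the window by region, ORDER BY defines the frame.
rows = conn.execute("""
    SELECT region, month, revenue,
           SUM(revenue) OVER (
               PARTITION BY region ORDER BY month
           ) AS running_total
    FROM sales
    ORDER BY region, month
""").fetchall()

for r in rows:
    print(r)
```

The same `SUM(...) OVER (PARTITION BY ... ORDER BY ...)` shape carries over directly to warehouse engines such as Snowflake or BigQuery.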
Additional Comments:
# of Resources: 22
Role(s): Technical Role
Location(s): India
Planned Start Date: 1/1/2026
Planned End Date: 6/30/2026
Project Overview:
Role Scope / Deliverables: We are seeking a highly skilled Data Engineer with strong experience in Databricks, PySpark, Python, SQL, and AWS to join our data engineering team on or before the first week of December 2025.
The candidate will be responsible for designing, developing, and optimizing large-scale data pipelines and analytics solutions that drive business insights and operational efficiency.
Design, build, and maintain scalable data pipelines using Databricks and PySpark.
Develop and optimize complex SQL queries for data extraction, transformation, and analysis.
Implement data integration solutions across multiple AWS services (S3, Glue, Lambda, Redshift, EMR, etc.).
Collaborate with analytics, data science, and business teams to deliver clean, reliable, and timely datasets.
Ensure data quality, performance, and reliability across data workflows.
Participate in code reviews, data architecture discussions, and performance optimization initiatives.
Support migration and modernization efforts for legacy data systems to modern cloud-based solutions.
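The pipeline work described above follows an extract-transform-load shape. A minimal sketch in plain Python, with hypothetical field names and an in-memory sink; a Databricks/PySpark job would express the same steps as DataFrame transformations over data in S3:

```python
import csv
import io

# Hypothetical raw feed; in practice this would be read from S3, a database, etc.
RAW = """order_id,amount,currency
1,19.99,USD
2,5.00,usd
3,,USD
"""

def extract(raw: str):
    """Read raw CSV into dictionaries (the ingest step)."""
    return list(csv.DictReader(io.StringIO(raw)))

def transform(rows):
    """Wrangle: drop bad records, normalize casing, cast types."""
    out = []
    for row in rows:
        if not row["amount"]:          # data-quality rule: amount is required
            continue
        out.append({
            "order_id": int(row["order_id"]),
            "amount": float(row["amount"]),
            "currency": row["currency"].upper(),
        })
    return out

def load(rows, sink):
    """Append cleaned records to the target store (here, just a list)."""
    sink.extend(rows)
    return len(rows)

warehouse = []
loaded = load(transform(extract(RAW)), warehouse)
print(loaded, warehouse)
```

Keeping extract, transform, and load as separate functions is what makes each stage independently testable, which is the property the code-review and data-quality responsibilities above depend on.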
Key Skills:
Hands-on experience with Databricks, PySpark & Python for building ETL/ELT pipelines.
Proficiency in SQL (performance tuning, complex joins, CTEs, window functions).
Strong understanding of AWS services (S3, Glue, Lambda, Redshift, CloudWatch, etc.).
Experience with data modeling, schema design, and performance optimization.
Familiarity with CI/CD pipelines, version control (Git), and workflow orchestration (Airflow preferred).
Excellent problem-solving, communication, and collaboration skills.
Skills: Databricks, PySpark & Python, SQL, AWS Services
Must-Haves
Python/PySpark (5+ years), SQL (5+ years), Databricks (3+ years), AWS Services (3+ years), ETL tools (Informatica, Glue, DataProc) (3+ years)
Hands-on experience with Databricks, PySpark & Python for ETL/ELT pipelines.
Proficiency in SQL (performance tuning, complex joins, CTEs, window functions).
Strong understanding of AWS services (S3, Glue, Lambda, Redshift, CloudWatch, etc.).
Experience with data modeling, schema design, and performance optimization.
Familiarity with CI/CD pipelines, Git, and workflow orchestration (Airflow preferred).
Notice period - Immediate to 15 days
Location: Bangalore
The MIS Executive will manage and analyze data, generate reports, and provide insights to support effective decision-making within the organization. The role involves maintaining and enhancing the Management Information System (MIS) to ensure accurate and timely information flow.
Key Responsibilities:
· Collect, compile, and analyze data from various sources to create comprehensive reports.
· Develop and maintain efficient databases, ensuring data accuracy and integrity.
· Generate and distribute daily, weekly, and monthly reports to relevant stakeholders.
· Implement data validation and cleansing processes to ensure data accuracy.
· Collaborate with different departments to understand their data requirements and provide support in extracting relevant information.
· Identify and implement process improvements to enhance data quality and reporting efficiency.
· Conduct regular audits of data to identify and rectify discrepancies.
· Stay updated on industry trends and best practices in data management and reporting.
Qualifications/Requirements:
Education:
· Bachelor's degree in any field.
Experience:
· Minimum 2 years of proven experience in an MIS Executive role.
· Jewelry industry experience is a plus.
Required Skills:
· Skilled in product cost analysis, order management, invoicing knowledge, and comprehensive inventory oversight.
· Strong analytical and problem-solving skills.
· Proficient in data analysis tools and Microsoft Excel.
· Working knowledge of database management systems and SQL.
· Excellent communication and interpersonal skills.
· Ability to work independently and collaboratively in a team.
· Detail-oriented with a high level of accuracy.
Role Overview
We are looking for a passionate Software Engineer with 1–3 years of hands-on experience in backend engineering to join our team in Mumbai. The ideal candidate will have strong programming skills in GoLang, a solid understanding of SQL databases, and exposure to or interest in High Performance Computing (HPC) concepts. You will be responsible for designing, developing, optimizing, and maintaining backend services that are scalable, efficient, and secure.
Key Responsibilities
- Develop, build, and maintain backend services and microservices using GoLang
- Design and optimize database schemas and write efficient SQL queries for relational databases
- Work on high-performance applications by optimizing code, memory usage, and execution speed
- Collaborate with cross-functional teams including frontend, DevOps, QA, and product
- Participate in code reviews, troubleshoot production issues, and follow best engineering practices
- Contribute to improving system performance, reliability, and scalability
- Stay up to date with emerging backend technologies, tools, and frameworks
Required Skills
Technical Skills
- 1–5 years of experience in backend development
- Strong hands-on experience with GoLang (Golang)
- Good understanding of SQL and relational database design
- Exposure to or understanding of HPC concepts such as concurrency, parallelism, distributed processing, or performance optimization
- Experience with RESTful APIs and microservice architectures
- Familiarity with version control systems (Git)
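The HPC concepts listed above (concurrency, parallelism, distributed work) often reduce to a worker-pool pattern. A minimal sketch, shown in Python for brevity since the function names here are illustrative; in Go the same shape would be a pool of goroutines consuming from a channel:

```python
from concurrent.futures import ThreadPoolExecutor

def process(job: int) -> int:
    """Stand-in for an expensive unit of work (I/O call, computation, etc.)."""
    return job * job

jobs = range(8)

# Fan the jobs out across a fixed pool of workers, then collect results
# in submission order -- the same shape as goroutines feeding a results channel.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(process, jobs))

print(results)
```

The key design point is bounding the worker count: it keeps resource usage predictable under load, which is what distinguishes a worker pool from spawning one thread (or goroutine) per job.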
Soft Skills
- Strong analytical and problem-solving abilities
- Ability to work effectively in a fast-paced, collaborative team environment
- Good communication and documentation skills
- Strong ownership mindset with a willingness to learn
Good to Have
- Experience with cloud platforms (AWS, Azure, or GCP)
- Knowledge of Docker or other containerization tools
- Understanding of CI/CD pipelines
- Experience with performance profiling and monitoring tools
Education
- Bachelor’s degree in Computer Science, Engineering, or a related technical field
Why Join Oneture Technologies?
- Opportunity to work on high-impact, modern technology projects
- Learning-driven culture with strong mentorship and continuous upskilling
- Exposure to cloud-native and cutting-edge backend technologies
- Collaborative, startup-like environment with real ownership of projects
🚀 We're Hiring: AUTOSAR Ethernet Experts at InfoGrowth! 🚀
Join InfoGrowth as an AUTOSAR Ethernet Expert and contribute to pioneering advancements in automotive technology!
Job Role: AUTOSAR Ethernet Expert
Mandatory Skills:
- Embedded C programming
- AUTOSAR knowledge
- Ethernet expertise
Key Requirements:
- Hands-on experience with Ethernet stack integration/development
- Experience with Ethernet switch topics
- Proficiency in Embedded C programming
- Experience with Marvell Ethernet Switch (88Q5050/88Q5XXX)
- Familiarity with software development tools such as CAN Analyzer, CANoe, and Debugger
- Strong problem-solving skills with the ability to address technical issues independently
- Experience with Groovy or any scripting language is a plus
- Knowledge of the ASPICE process is an added advantage
- Excellent analytical and communication skills
Job Responsibilities:
- Engage in tasks related to the integration and development of Flash Bootloader (FBL) features, alongside testing activities.
- Collaborate continuously with counterparts in Germany to understand requirements and effectively interpret and develop FBL features.
- Responsible for creating test specifications and accurately documenting results.
Why Join InfoGrowth?
- Be a part of an innovative team dedicated to transforming the automotive industry with cutting-edge software solutions.
- Work on exciting projects that challenge your skills and foster professional growth.
- Enjoy a collaborative and dynamic work environment that values creativity and teamwork.
🔗 Apply Now to help drive the future of automotive technology with InfoGrowth!
Responsibilities:
- Possessing excellent product knowledge to enhance customer experience
- Ability to conduct product demos, understand customer requirements and recommend solutions for different business use cases using our suite of tools & platforms.
- Assist sales team with upsell of new products to customers based on usage...
- Proactively engage with customers to provide support & assist while onboarding to reduce their time to "Glam! moments."
- Maintaining a pleasant working environment for your team
Basic Requirements:
- A bachelor’s degree in administration or a related field.
- Excellent interpersonal, written & oral communication skills
- Usage of innovative methods for conflict resolution
- Must be fluent in English & Hindi
Job Description:
- Understanding of the depth and breadth of computer vision and deep learning algorithms.
- At least 2 years of experience in computer vision and/or deep learning for object detection and tracking, along with semantic or instance segmentation, in either an academic or industrial setting.
- Experience with machine/deep learning frameworks such as TensorFlow, Keras, scikit-learn, and PyTorch.
- Experience in training models through GPU computing using NVIDIA CUDA or on the cloud.
- Ability to transform research articles into working solutions to solve real-world problems.
- Strong experience in using both basic and advanced image processing algorithms for feature engineering.
- Proficiency in Python and related packages such as NumPy, scikit-image, PIL/Pillow, OpenCV, Matplotlib, Seaborn, etc.
- Excellent written and verbal communication skills for communicating effectively with the team, and the ability to present information to varied technical and non-technical audiences.
- Must be able to produce solutions independently in an organized manner and also be able to work in a team when required.
- Must have good Object-Oriented Programming and logical analysis skills in Python.
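The "basic image processing algorithms for feature engineering" requirement above can be illustrated with a toy example. This is a simple horizontal gradient filter in pure Python over a hypothetical 4-pixel-wide grayscale image; real work would use NumPy/OpenCV, and this kernel is the building block behind operators like Sobel and Prewitt:

```python
# A tiny grayscale "image" as nested lists: a black region meeting a white one.
IMG = [
    [0, 0, 255, 255],
    [0, 0, 255, 255],
    [0, 0, 255, 255],
]

def horizontal_gradient(img):
    """Apply a [-1, 0, 1] edge filter along each row and return the
    absolute response; borders are left at zero for simplicity."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(1, w - 1):
            out[y][x] = abs(img[y][x + 1] - img[y][x - 1])
    return out

grad = horizontal_gradient(IMG)
print(grad[0])
```

The strong responses line up with the vertical black/white boundary, which is exactly the kind of hand-crafted feature (edges, gradients, textures) that feeds classical pipelines and complements learned features.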
