
Job Role: Adaptive AUTOSAR + Bootloader Developer
Mandatory Skills:
- Adaptive AUTOSAR development
- Flash Bootloader experience, including software update topics
- Proficient in C++ and Python programming
- Hands-on experience with ISO 14229 (UDS protocol)
- Application development experience in Service-Oriented Architectures
- Hands-on experience with QNX and Linux operating systems
- Familiarity with software development tools such as CAN Analyzer, CANoe, and debuggers
- Strong problem-solving skills and the ability to work independently
- Exposure to the ASPICE Process is an advantage
- Excellent analytical and communication skills
Job Responsibilities:
- Engage in tasks related to the integration and development of Flash Bootloader (FBL) features and perform comprehensive testing activities.
- Collaborate continuously with counterparts in Germany to understand requirements and develop FBL features effectively.
- Create test specifications and meticulously document testing results.
Why Join InfoGrowth?
- Become part of an innovative team focused on transforming the automotive industry with cutting-edge technology.
- Work on exciting projects that challenge your skills and promote professional growth.
- Enjoy a collaborative environment that values teamwork and creativity.
Apply now to shape the future of automotive technology with InfoGrowth!

REVIEW CRITERIA:
MANDATORY:
- Strong Senior/Lead DevOps Engineer Profile
- Must have 8+ years of hands-on experience in DevOps engineering, with a strong focus on AWS cloud infrastructure and services (EC2, VPC, EKS, RDS, Lambda, CloudFront, etc.).
- Must have strong system administration expertise (installation, tuning, troubleshooting, security hardening)
- Must have solid experience in CI/CD pipeline setup and automation using tools such as Jenkins, GitHub Actions, or similar
- Must have hands-on experience with Infrastructure as Code (IaC) tools such as Terraform, CloudFormation, or Ansible
- Must have strong database expertise across MongoDB and Snowflake (administration, performance optimization, integrations)
- Must have experience with monitoring and observability tools such as Prometheus, Grafana, ELK, CloudWatch, or Datadog
- Must have good exposure to containerization and orchestration using Docker and Kubernetes (EKS)
- Must be currently working in an AWS-based environment (AWS experience must be in the current organization)
- It is an individual contributor (IC) role
PREFERRED:
- Proficiency in scripting languages (Bash, Python) for automation and operational tasks.
- Strong understanding of security best practices, IAM, WAF, and GuardDuty configurations.
- Exposure to DevSecOps and end-to-end automation of deployments, provisioning, and monitoring.
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- Candidates from NCR region only (No outstation candidates).
ROLES AND RESPONSIBILITIES:
We are seeking a highly skilled Senior DevOps Engineer with 8+ years of hands-on experience in designing, automating, and optimizing cloud-native solutions on AWS. AWS and Linux expertise are mandatory. The ideal candidate will have strong experience across databases, automation, CI/CD, containers, and observability, with the ability to build and scale secure, reliable cloud environments.
KEY RESPONSIBILITIES:
Cloud & Infrastructure as Code (IaC)-
- Architect and manage AWS environments ensuring scalability, security, and high availability.
- Implement infrastructure automation using Terraform, CloudFormation, and Ansible.
- Configure VPC Peering, Transit Gateway, and PrivateLink/Connect for advanced networking.
CI/CD & Automation:
- Build and maintain CI/CD pipelines (Jenkins, GitHub Actions, SonarQube, automated testing).
- Automate deployments, provisioning, and monitoring across environments.
Containers & Orchestration:
- Deploy and operate workloads on Docker and Kubernetes (EKS).
- Implement IAM Roles for Service Accounts (IRSA) for secure pod-level access.
- Optimize performance of containerized and microservices applications.
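As a brief illustration of the IRSA item above (a minimal sketch; the names, namespace, and role ARN are hypothetical placeholders, not from this posting): annotating a Kubernetes ServiceAccount with an IAM role ARN lets pods that use it obtain temporary, role-scoped AWS credentials through the cluster's OIDC provider, instead of inheriting node instance-profile permissions.

```yaml
# Hypothetical example of IAM Roles for Service Accounts (IRSA) on EKS.
# The ARN, names, and namespace below are placeholders.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-reader
  namespace: demo
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/app-reader-role
---
# Pods referencing this service account receive temporary credentials
# scoped to the role's policies rather than node-level credentials.
apiVersion: v1
kind: Pod
metadata:
  name: app
  namespace: demo
spec:
  serviceAccountName: app-reader
  containers:
    - name: app
      image: public.ecr.aws/docker/library/busybox:latest
      command: ["sleep", "3600"]
```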
Monitoring & Reliability:
- Implement observability with Prometheus, Grafana, ELK, CloudWatch, M/Monit, and Datadog.
- Establish logging, alerting, and proactive monitoring for high availability.
Security & Compliance:
- Apply AWS security best practices including IAM, IRSA, SSO, and role-based access control.
- Manage WAF, GuardDuty, Inspector, and other AWS-native security tools.
- Configure VPNs, firewalls, secure access policies, and AWS Organizations.
Databases & Analytics:
- Administer MongoDB, Snowflake, Aerospike, RDS, PostgreSQL, MySQL/MariaDB, and other RDBMS.
- Manage data reliability, performance tuning, and cloud-native integrations.
- Work with Apache Airflow and Spark.
IDEAL CANDIDATE:
- 8+ years in DevOps engineering, with strong AWS Cloud expertise (EC2, VPC, Transit Gateway, RDS, S3, IAM, EKS, EMR, SCP, MWAA, Lambda, CloudFront, SNS, SES, etc.).
- Linux expertise is mandatory (system administration, tuning, troubleshooting, CIS hardening, etc.).
- Strong knowledge of databases: MongoDB, Snowflake, Aerospike, RDS, PostgreSQL, MySQL/MariaDB, and other RDBMS.
- Hands-on with Docker, Kubernetes (EKS), Terraform, CloudFormation, Ansible.
- Proven ability with CI/CD pipeline automation and DevSecOps practices.
- Practical experience with VPC Peering, Transit Gateway, WAF, GuardDuty, Inspector, and advanced AWS networking and security tools.
- Expertise in observability tools: Prometheus, Grafana, ELK, CloudWatch, M/Monit, and Datadog.
- Strong scripting skills (Shell/Bash, Python, or similar) for automation.
- Bachelor's or Master's degree
- Effective communication skills
PERKS, BENEFITS AND WORK CULTURE:
- Competitive Salary Package
- Generous Leave Policy
- Flexible Working Hours
- Performance-Based Bonuses
- Health Care Benefits

JOB DETAILS:
* Job Title: Specialist I - DevOps Engineering
* Industry: Global Digital Transformation Solutions Provider
* Salary: Best in Industry
* Experience: 7-10 years
* Location: Bengaluru (Bangalore), Chennai, Hyderabad, Kochi (Cochin), Noida, Pune, Thiruvananthapuram
Job Description
Job Summary:
As a DevOps Engineer focused on Perforce to GitHub migration, you will execute seamless, large-scale source control migrations. You must be proficient with GitHub Enterprise and Perforce, possess strong scripting skills (Python/Shell), and have a deep understanding of version control concepts.
The ideal candidate is a self-starter and a problem-solver who thrives on challenges while ensuring smooth transitions with minimal disruption to development workflows.
Key Responsibilities:
- Analyze and prepare Perforce repositories: clean workspaces, merge streams, and remove unnecessary files.
- Handle large files efficiently using Git Large File Storage (LFS) for files exceeding GitHub's 100 MB size limit.
- Use git-p4 fusion (a Python-based tool) to clone and migrate Perforce repositories incrementally, ensuring data integrity.
- Define migration scope: determine how much history to migrate and plan the repository structure.
- Manage branch renaming and repository organization for optimized post-migration workflows.
- Collaborate with development teams to determine migration points and finalize migration strategies.
- Troubleshoot issues related to file sizes, Python compatibility, network connectivity, or permissions during migration.
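The large-file step above can be sketched in Python (an illustrative helper, not official tooling; GitHub's documented hard per-file push limit is 100 MB): walk the working tree and report files that must be moved to Git LFS before the migrated history is pushed.

```python
import os

# GitHub rejects pushes containing individual files larger than 100 MB,
# so anything at or above this threshold should be tracked with Git LFS.
GITHUB_FILE_LIMIT = 100 * 1024 * 1024  # bytes

def files_needing_lfs(root: str, limit: int = GITHUB_FILE_LIMIT) -> list[str]:
    """Return repo-relative paths of files whose size meets or exceeds `limit`."""
    oversized = []
    for dirpath, dirnames, filenames in os.walk(root):
        # Skip Git metadata; only working-tree content is pushed.
        dirnames[:] = [d for d in dirnames if d != ".git"]
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.getsize(path) >= limit:
                oversized.append(os.path.relpath(path, root))
    return sorted(oversized)
```

Each reported path (or a matching glob pattern) would then be registered with `git lfs track` before pushing the converted repository.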
Required Qualifications:
- Strong knowledge of Git/GitHub and preferably Perforce (Helix Core): understanding of differences, workflows, and integrations.
- Hands-on experience with P4-Fusion.
- Familiarity with cloud platforms (AWS, Azure) and containerization technologies (Docker, Kubernetes).
- Proficiency in migration tools such as git-p4 fusion: installation, configuration, and troubleshooting.
- Ability to identify and manage large files using Git LFS to meet GitHub repository size limits.
- Strong scripting skills in Python and Shell for automating migration and restructuring tasks.
- Experience in planning and executing source control migrations: defining scope, branch mapping, history retention, and permission translation.
- Familiarity with CI/CD pipeline integration to validate workflows post-migration.
- Understanding of source code management (SCM) best practices, including version history and repository organization in GitHub.
- Excellent communication and collaboration skills for cross-team coordination and migration planning.
- Proven practical experience in repository migration, large file management, and history preservation during Perforce to GitHub transitions.
Skills: GitHub, Kubernetes, Perforce, Perforce (Helix Core), DevOps tools
Must-Haves
Git/GitHub (advanced), Perforce (Helix Core) (advanced), Python/Shell scripting (strong), P4-Fusion (hands-on experience), Git LFS (proficient)
Job Description:
As an Odoo Developer, you will play a crucial role in designing, developing, and implementing Odoo ERP solutions for our clients. You will work on both default Odoo modules and custom Odoo module development, ensuring that our clients' business processes are efficiently supported by the ERP system.
Key Responsibilities:
- Collaborate with project managers, business analysts, and clients to gather and understand requirements.
- Customize and configure Odoo ERP modules based on client needs.
- Develop and maintain custom Odoo modules to extend ERP functionality.
- Utilize Python for Odoo development, ensuring high-quality code.
- Create and optimize web-based user interfaces using HTML, CSS, and JavaScript (jQuery).
- Integrate with RESTful and SOAP web services.
- Implement APIs in Odoo to connect with third-party applications.
- Test and debug Odoo modules to ensure they meet quality standards.
- Provide technical support and troubleshooting assistance for Odoo implementations.
- Keep abreast of Odoo updates and trends to suggest improvements and optimizations.
- Collaborate with the development team to enhance overall system performance and reliability.
Qualifications:
- Minimum of 2 years of experience as an Odoo Developer.
- Proficiency in Python for Odoo development.
- Strong knowledge of Odoo's default modules and features.
- Experience with HTML, CSS, jQuery, and JavaScript for web development.
- Familiarity with web services, including RESTful and SOAP.
- Previous experience in API implementation within Odoo.
- Problem-solving skills and attention to detail.
- Excellent communication skills.
- Ability to work independently and as part of a team.
- Adept at time management and meeting project deadlines.
Preferred Qualifications:
- Odoo certification is a plus.
- Experience with other programming languages or frameworks.
- Knowledge of database management (PostgreSQL).
- Previous experience with ERP implementations.
Benefits:
- Competitive salary
- Health and wellness benefits
- Opportunities for professional growth and development
- Collaborative and innovative work environment
Responsibilities:
- Install, configure, and maintain Kubernetes clusters.
- Develop Kubernetes-based solutions.
- Improve Kubernetes infrastructure.
- Work with other engineers to troubleshoot Kubernetes issues.
Kubernetes Engineer Requirements & Skills
- Kubernetes administration experience, including installation, configuration, and troubleshooting
- Kubernetes development experience
- Linux/Unix experience
- Strong analytical and problem-solving skills
- Excellent communication and interpersonal skills
- Ability to work independently and as part of a team

Job Description:
• Contribute to customer discussions to collect requirements.
• Engage in internal and customer POCs to realize the potential solutions envisaged for customers.
• Design, develop, and migrate vRA blueprints and vRO workflows; strong hands-on knowledge of vROps and its integrations with applications and VMware solutions.
• Develop automation scripts to support the design and implementation of VMware projects.
Qualifications:
• Maintain current, in-depth technical knowledge of the entire VMware product portfolio and future product direction.
• Maintain deep technical and business knowledge of cloud computing and networking applications, industry directions, and trends.
• Experience with REST APIs and/or Python programming; TypeScript/Node.js backend experience.
• Experience with Kubernetes.
• Familiarity with DevOps tools such as Ansible, Puppet, and Terraform.
• End-to-end experience in the architecture, design, and development of the VMware Cloud Automation suite, with good exposure to VMware products and solutions.
• Hands-on experience in automation, coding, debugging, and releases.
• Sound process knowledge spanning requirement gathering, implementation, deployment, and support.
• Experience working with global teams, customers, and partners, with solid communication skills.
• VMware CMA certification would be a plus.
• Academic background in MS/BE/B.Tech in IT/CS/ECE/EE preferred.

Platform Services Engineer
DevSecOps Engineer
- Strong systems experience: Linux, networking, cloud, APIs
- Scripting-language programming: Shell, Python
- Strong debugging capability
- AWS platform: IAM, networking, EC2, Lambda, S3, CloudWatch
- Knowledge of Terraform, Packer, Ansible, Jenkins
- Observability: Prometheus, InfluxDB, Dynatrace, Grafana, Splunk
- DevSecOps CI/CD: Jenkins
- Microservices
- Security and access management
- Container orchestration a plus: Kubernetes, Docker, etc.
- Knowledge of big data platforms: EMR, Databricks; Cloudera a plus
Hammoq is a fast-growing startup in the US and UK.
- Design and implement secure automation solutions for development, testing, and production environments
- Build and deploy automation, monitoring, and analysis solutions
- Manage our continuous integration and delivery pipeline to maximize efficiency
- Implement industry best practices for system hardening and configuration management
- Secure, scale, and manage Linux virtual environments
- Develop and maintain solutions for operational administration, system/data backup, disaster recovery, and security/performance monitoring
- Continuously evaluate existing systems against industry standards and make recommendations for improvement
Desired Skills & Experience
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field
- Understanding of system administration in Linux environments
- Strong knowledge of configuration management tools
- Familiarity with continuous integration tools such as Jenkins, Travis CI, and Circle CI
- Proficiency in scripting languages including Bash, Python, and JavaScript
- Strong communication and documentation skills
- Ability to drive to goals and milestones while valuing and maintaining strong attention to detail
- Excellent judgment, analytical thinking, and problem-solving skills
- Full understanding of software development lifecycle best practices
- Self-motivated individual with excellent time management and organizational skills
In PM's Words
Bash scripting, containerd (or Docker), Linux operating system basics, Kubernetes, Git, Jenkins (or any pipeline management), GCP (or familiarity with any cloud technology)
Linux is the major requirement; most candidates come from Windows backgrounds, but we need Linux. Windows experience is an added advantage.
You will certainly be working with an amazing team.
Mandatory:
- A minimum of 1 year of development, system design, or engineering experience
- Excellent social, communication, and technical skills
- In-depth knowledge of Linux systems
- Development experience in at least two of the following languages: PHP, Go, Python, JavaScript, C/C++, Bash
- In-depth knowledge of web servers (Apache; Nginx preferred)
- Strong in using DevOps tools: Ansible, Jenkins, Docker, ELK
- Knowledge of APM tools; New Relic preferred
- Ability to learn quickly, master our existing systems, and identify areas of improvement
- Self-starter who enjoys and takes pride in the engineering work of their team
- Tried-and-tested real-world cloud computing experience: AWS/GCP/Azure
- Strong understanding of resilient systems design
- Experience in network design and management
- Hands-on experience in the following is a must: Unix, Python, and shell scripting.
- Hands-on experience creating infrastructure on the AWS cloud platform is a must.
- Must have experience with industry-standard CI/CD tools such as Git/Bitbucket, Jenkins, Maven, Artifactory, and Chef.
- Must be good at these DevOps tools:
  - Version control tools: Git, CVS
  - Build tools: Maven and Gradle
  - CI tools: Jenkins
- Hands-on experience with analytics tools, ELK stack.
- Knowledge of Java will be an advantage.
- Experience designing and implementing an effective and efficient CI/CD flow that gets code from dev to prod with high quality and minimal manual effort.
- Ability to help debug and optimise code and automate routine tasks.
- Should have excellent communication skills.
- Experience in dealing with difficult situations and making decisions with a sense of urgency.
- Experience with Agile and Jira will be an added advantage.







