
Role: DevOps Engineer
Experience: 4–7 years
CTC: up to 28 LPA
Key Responsibilities
• Design, build, and manage scalable infrastructure on cloud platforms (GCP, AWS, Azure, or OCI)
• Administer and optimize Kubernetes clusters and container runtimes (Docker, containerd)
• Develop and maintain CI/CD pipelines for multiple services and environments
• Manage infrastructure as code using tools like Terraform and/or Pulumi
• Automate operations with Python and shell scripting for deployment, monitoring, and maintenance
• Ensure high availability and performance of production systems and troubleshoot incidents effectively
• Monitor system metrics and implement observability best practices using tools such as Prometheus, Grafana, and the ELK stack
• Collaborate with development, security, and product teams to align infrastructure with business needs
• Apply best practices in cloud networking, Linux administration, and configuration management
• Support compliance and security audits; assist with implementation of cloud security measures (e.g., firewalls, IDS/IPS, IAM hardening)
• Participate in on-call rotations and incident response activities
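The Python automation bullet above can be illustrated with a minimal sketch: a generic retry-with-backoff helper of the kind used in deployment and monitoring scripts. The names `retry_with_backoff` and `check_health` are hypothetical, not from any particular codebase.

```python
import time
from typing import Callable, TypeVar

T = TypeVar("T")

def retry_with_backoff(fn: Callable[[], T], attempts: int = 3, base_delay: float = 0.1) -> T:
    """Call fn, retrying with exponential backoff on any exception."""
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise  # out of attempts: surface the last error
            time.sleep(base_delay * (2 ** i))
    raise RuntimeError("unreachable")

def check_health(probe: Callable[[], int]) -> bool:
    """Treat any 2xx status code from the probe as healthy."""
    status = retry_with_backoff(probe)
    return 200 <= status < 300
```

In practice the probe would wrap an HTTP call to a service endpoint; keeping it injectable makes the helper easy to unit-test without a network.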

Similar jobs
We are looking for a skilled and motivated Data Engineer with strong experience in Python programming and Google Cloud Platform (GCP) to join our data engineering team. The ideal candidate will be responsible for designing, developing, and maintaining robust and scalable ETL (Extract, Transform, Load) data pipelines. The role involves working with various GCP services, implementing data ingestion and transformation logic, and ensuring data quality and consistency across systems.
Key Responsibilities:
- Design, develop, test, and maintain scalable ETL data pipelines using Python.
- Work extensively with Google Cloud Platform (GCP) services such as:
  - Dataflow for real-time and batch data processing
  - Cloud Functions for lightweight serverless compute
  - BigQuery for data warehousing and analytics
  - Cloud Composer (based on Apache Airflow) for orchestrating data workflows
  - Google Cloud Storage (GCS) for managing data at scale
  - IAM for access control and security
  - Cloud Run for containerized applications
- Perform data ingestion from various sources and apply transformation and cleansing logic to ensure high-quality data delivery.
- Implement and enforce data quality checks, validation rules, and monitoring.
- Collaborate with data scientists, analysts, and other engineering teams to understand data needs and deliver efficient data solutions.
- Manage version control using GitHub and participate in CI/CD pipeline deployments for data projects.
- Write complex SQL queries for data extraction and validation from relational databases such as SQL Server, Oracle, or PostgreSQL.
- Document pipeline designs, data flow diagrams, and operational support procedures.
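The ingestion, cleansing, and data-quality bullets above can be sketched in pure Python. The schema (`id`, `email`, `created_at`) and function names are illustrative assumptions, not a real pipeline.

```python
from datetime import datetime
from typing import Any

REQUIRED_FIELDS = ("id", "email", "created_at")  # hypothetical schema

def cleanse(record: dict[str, Any]) -> dict[str, Any]:
    """Normalise a raw record: lowercase/trim emails, parse ISO timestamps."""
    out = dict(record)
    if isinstance(out.get("email"), str):
        out["email"] = out["email"].strip().lower()
    if isinstance(out.get("created_at"), str):
        out["created_at"] = datetime.fromisoformat(out["created_at"])
    return out

def validate(record: dict[str, Any]) -> list[str]:
    """Return a list of data-quality violations (empty means the record passes)."""
    errors = [f"missing {f}" for f in REQUIRED_FIELDS if record.get(f) in (None, "")]
    email = record.get("email")
    if isinstance(email, str) and "@" not in email:
        errors.append("malformed email")
    return errors

def run_pipeline(raw: list[dict[str, Any]]) -> tuple[list[dict[str, Any]], list[dict[str, Any]]]:
    """Split records into (clean, rejected) after cleansing and validation."""
    clean, rejected = [], []
    for rec in raw:
        rec = cleanse(rec)
        (clean if not validate(rec) else rejected).append(rec)
    return clean, rejected
```

A production version of this shape would run inside Dataflow or a Composer task, with rejects routed to a dead-letter table for monitoring.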
Required Skills:
- 4–8 years of hands-on experience in Python for backend or data engineering projects.
- Strong understanding and working experience with GCP cloud services (especially Dataflow, BigQuery, Cloud Functions, Cloud Composer, etc.).
- Solid understanding of data pipeline architecture, data integration, and transformation techniques.
- Experience in working with version control systems like GitHub and knowledge of CI/CD practices.
- Strong experience in SQL with at least one enterprise database (SQL Server, Oracle, PostgreSQL, etc.).
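As a sketch of the SQL-validation work mentioned above, here is a referential-integrity check over a hypothetical orders/customers schema, using SQLite only so the demo is self-contained; the same query pattern applies to SQL Server, Oracle, or PostgreSQL.

```python
import sqlite3

def find_orphaned_orders(conn: sqlite3.Connection) -> list[int]:
    """Return order ids whose customer_id has no matching customer row."""
    rows = conn.execute(
        """
        SELECT o.id
        FROM orders o
        LEFT JOIN customers c ON c.id = o.customer_id
        WHERE c.id IS NULL
        ORDER BY o.id
        """
    ).fetchall()
    return [r[0] for r in rows]

def build_demo_db() -> sqlite3.Connection:
    """In-memory demo schema seeded with one orphaned order (customer 99 missing)."""
    conn = sqlite3.connect(":memory:")
    conn.executescript(
        """
        CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER);
        INSERT INTO customers VALUES (1, 'Ada');
        INSERT INTO orders VALUES (10, 1), (11, 99);
        """
    )
    return conn
```

The `LEFT JOIN … WHERE … IS NULL` idiom is a standard way to surface rows that violate an expected foreign-key relationship.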
1. Should have worked with Agile methodology and microservices architecture
2. Should have 7+ years of experience in Python and the Django framework
3. Should have good knowledge of DRF (Django REST Framework)
4. Should have knowledge of user authentication (JWT, OAuth2), API authentication, access control lists, etc.
5. Should have working experience with session management in Django
6. Should have expertise in Django's MVT architecture and the use of templates in the frontend
7. Should have working experience with PostgreSQL
8. Should have working experience with RabbitMQ as a message broker and Celery task queues
9. Good to have: experience integrating JavaScript into Django templates
Job Summary:
We are seeking a skilled and experienced Java Developer with hands-on expertise in AWS, Spring Boot, and Microservices architecture. As a core member of our backend development team, you will design and build scalable cloud-native applications that support high-performance systems and business logic.
Key Responsibilities:
- Design, develop, and maintain backend services using Java (Spring Boot).
- Build and deploy microservices-based architectures hosted on AWS.
- Collaborate with DevOps and architecture teams to ensure scalable and secure cloud solutions.
- Write clean, efficient, and well-documented code.
- Optimize application performance and troubleshoot production issues.
- Participate in code reviews, technical discussions, and architecture planning.
Must-Have Skills:
- 4.5+ years of experience in Java development.
- Strong proficiency in Spring Boot and RESTful APIs.
- Proven hands-on experience with AWS services (EC2, S3, Lambda, RDS, etc.).
- Solid understanding of microservices architecture, CI/CD, and containerization tools.
- Experience with version control (Git), and deployment tools.
Good experience with and understanding of CRM and ERP processes and design
- 3+ years of hands-on experience in ERPNext Platform and Frappe Framework, Python, JavaScript and MySQL/MariaDB
- Strong knowledge of configuration and customisation of the ERPNext platform
PE Backend Developer
An opportunity to revolutionize the restaurant industry
Here at Rebel Foods, we are using technology and automation to disrupt the traditional food industry. We are focused on building an operating system for Cloud Kitchens - using the most innovative technologies - to provide the best food experiences for our customers.
You will enjoy working with us, if:
- You are passionate about using technology to solve customer problems
- You are a software craftsman or craftswoman who is obsessed with high quality software
- You have a flair for good design and architecture
- You are unafraid of rearchitecting or refactoring code to improve it
- You are willing to dive deep to solve complex software issues
- You are a teacher and mentor
Our technology ecosystem:
- Languages: Java, TypeScript, JavaScript, Ruby
- Frameworks, environments: Spring Boot, NodeJS, ExpressJS
- Databases: AWS Aurora MySQL, MongoDB
- Cloud: AWS
- Microservices, Service Oriented Architecture, REST APIs, Caching, Messaging, Logging, Monitoring and Alerting
- CI/CD and DevOps
- Bitbucket, Jira
You will mostly spend time on the following:
- Leading the design and implementation of software systems
- Driving engineering initiatives across teams with a focus on quality, maintainability, availability, scalability, security, performance and stability
- Writing efficient, maintainable, scalable, high quality code
- Reviewing code and tests
- Refactoring and improving code
- Teaching and mentoring team members
We’re excited about you if you have:
- At least 8 years of experience in software development, including experience building microservices and distributed systems
- Excellent programming skills in one or more languages: Java, C#, C++, TypeScript, JavaScript, Python or Ruby
- Experience working in Cloud environments: AWS, Azure, GCP
- Experience building secure, configurable, observable services
- Excellent troubleshooting and problem-solving skills
- The ability to work in an Agile environment
- The ability to collaborate effectively within and across engineering, product and business teams
We value engineers who are:
- Crazy about customer experience
- Willing to challenge the status quo and innovate
- Obsessed with quality, performance and frugality
- Willing to take complete responsibility and ownership of results
- Team players, teachers, mentors
Job Description
· Strong Core Java / C++ experience with strong hands-on coding
· Excellent understanding of logical and object-oriented design patterns, algorithms, and data structures
· Sound knowledge of application access methods, including authentication mechanisms, API quota limits, and different endpoint types (REST, Java, etc.)
· Strong experience with databases: not just a SQL programmer, but with knowledge of DB internals
· Sound knowledge of cloud databases available as a service (RDS, CloudSQL, Google BigQuery, Snowflake) is a plus
· Experience working in any cloud environment and microservices-based architecture utilizing GCP, Kubernetes, Docker, CircleCI, Azure, or similar technologies
- Around 8+ years of experience in software development in the IT industry
- Sound knowledge of Core Java
- Work experience in SDN/NFV and orchestration
- Work experience with open source and OpenFlow controllers (SDN)
- Experience with Agile methodology
- Good knowledge of MySQL, PostgreSQL, or any time-series DB; Kafka; ZooKeeper
- Good knowledge of ONOS, ODL (OpenDaylight), OpenKilda, Mininet
- Work experience with MVT/MVC architecture
- Good knowledge of networks, devices, service modeling, and automation in systems
- Work experience with API and JSON implementation
- Knowledge of OpenStack, Ansible, shell scripting, Chef, Puppet
- Good understanding of software development (i.e., the SDLC)
- Good team player; enthusiastic and a quick learner
- Good interpersonal skills, commitment, and a result-oriented attitude, with a zeal to learn new technologies and take on challenging tasks
- Knowledge of AWS and Azure cloud
- Set up, manage, automate and deploy AI models in development and production infrastructure.
- Orchestrate life cycle management of AI Models
- Create APIs and help business customers put results of the AI models into operations
- Develop MVP ML models and prototype applications applying known AI models, and verify the problem/solution fit
- Validate the AI Models
- Make model performant (time and space) based on the business needs
- Perform statistical analysis and fine-tuning using test results
- Train and retrain systems when necessary
- Extend existing ML libraries and frameworks
- Process, cleanse, and verify the integrity of data used for analysis
- Ensure that algorithms generate accurate user recommendations/insights/outputs
- Keep abreast with the latest AI tools relevant to our business domain
- Bachelor’s or Master’s degree in Computer Science, Statistics, or a related field
- A Master’s degree in data analytics or similar will be advantageous.
- 3 - 5 years of relevant experience in deploying AI models to production
- Understanding of data structures, data modeling, and software architecture
- Good knowledge of math, probability, statistics, and algorithms
- Ability to write robust code in Python/R
- Proficiency in query languages such as SQL
- Familiarity with machine learning frameworks such as PyTorch and TensorFlow, and libraries such as scikit-learn
- Experience with well-known machine learning models (SVM, clustering techniques, forecasting models, Random Forest, etc.)
- Knowledge of CI/CD for building and hosting solutions
- We don’t expect you to be an expert or an AI researcher, but you must be able to take existing models and best practices and adapt them to our environment.
- Adherence to compliance procedures in accordance with regulatory standards, requirements, and policies.
- Ability to work effectively and independently in a fast-paced agile environment with tight deadlines
- A flexible, pragmatic, and collaborative team player with an innate ability to engage with data architects, analysts, and scientists.
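As a sketch of the "take existing models and adapt them" expectation above: a nearest-centroid classifier with a held-out accuracy check, in pure Python. This is illustrative only; real work would use scikit-learn or PyTorch as listed in the requirements.

```python
import math
from collections import defaultdict

def fit_centroids(X, y):
    """'Train' a nearest-centroid model: average the feature vectors per class."""
    buckets = defaultdict(list)
    for features, label in zip(X, y):
        buckets[label].append(features)
    return {
        label: [sum(col) / len(rows) for col in zip(*rows)]
        for label, rows in buckets.items()
    }

def predict(centroids, x):
    """Assign x to the class with the nearest centroid (Euclidean distance)."""
    return min(centroids, key=lambda label: math.dist(x, centroids[label]))

def accuracy(centroids, X, y):
    """Fraction of held-out points classified correctly."""
    hits = sum(predict(centroids, xi) == yi for xi, yi in zip(X, y))
    return hits / len(y)
```

Evaluating on a held-out split rather than the training data is the habit that matters here; the same fit/predict/score shape carries over directly to scikit-learn estimators.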
