11+ Postfix Jobs in Chennai | Postfix Job openings in Chennai
About us:
HappyFox is a software-as-a-service (SaaS) support platform. We offer an enterprise-grade help desk ticketing system and intuitively designed live chat software.
We serve over 12,000 companies in 70+ countries. HappyFox is used by companies spanning education, media, e-commerce, retail, information technology, manufacturing, non-profit, government, and many other verticals that have an internal or external support function.
To know more, visit https://www.happyfox.com/
Responsibilities
- Build and scale production infrastructure in AWS for the HappyFox platform and its products.
- Research and implement systems, services, and tooling to improve the uptime, reliability and maintainability of our backend infrastructure, and to meet our internal SLOs and customer-facing SLAs.
- Implement consistent observability, deployment and IaC setups
- Lead incident management and actively respond to escalations/incidents in the production environment from customers and the support team.
- Hire and mentor other infrastructure engineers, and review their work to continuously ship improvements to production infrastructure and its tooling.
- Build and manage development infrastructure, and CI/CD pipelines for our teams to ship & test code faster.
- Lead infrastructure security audits
Requirements
- At least 7 years of experience in handling/building Production environments in AWS.
- At least 3 years of programming experience in building API/backend services for customer-facing applications in production.
- Proficient in managing/patching servers with Unix-based operating systems like Ubuntu Linux.
- Proficient in writing automation scripts or building infrastructure tools using Python/Ruby/Bash/Golang (a small illustrative sketch follows this list).
- Experience in deploying and managing production Python/NodeJS/Golang applications to AWS EC2, ECS or EKS.
- Experience in security hardening of infrastructure, systems and services.
- Proficient in containerised environments such as Docker, Docker Compose, Kubernetes
- Experience in setting up and managing test/staging environments, and CI/CD pipelines.
- Experience in IaC tools such as Terraform or AWS CDK
- Exposure/Experience in setting up or managing Cloudflare, Qualys and other related tools
- Passion for making systems reliable, maintainable, scalable and secure.
- Excellent verbal and written communication skills to address, escalate and express technical ideas clearly
- Bonus points – Hands-on experience with Nginx, Postgres, Postfix, Redis or Mongo systems.
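For context on the kind of automation tooling mentioned in the requirements above, here is a minimal, hypothetical Python sketch (standard library only) of a small uptime probe an infrastructure engineer might write to support SLO monitoring; the endpoint URLs are placeholders, not actual HappyFox systems.

```python
#!/usr/bin/env python3
"""Minimal availability probe: checks a list of HTTPS endpoints and reports
which are unhealthy. The endpoint URLs are hypothetical placeholders."""
import sys
import urllib.error
import urllib.request

# Hypothetical endpoints used purely for illustration.
ENDPOINTS = [
    "https://status.example.com/health",
    "https://api.example.com/ping",
]


def check(url: str, timeout: float = 5.0) -> bool:
    """Return True if the endpoint answers with HTTP 2xx within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except (urllib.error.URLError, OSError):
        return False


def main() -> int:
    failures = [url for url in ENDPOINTS if not check(url)]
    for url in failures:
        print(f"UNHEALTHY: {url}")
    # Non-zero exit code so a scheduler or CI job can alert on failure.
    return 1 if failures else 0


if __name__ == "__main__":
    sys.exit(main())
```

A script like this could be run from cron or a CI job, with alerting driven by the non-zero exit code.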
Job Title : IBM Sterling Integrator Developer
Experience : 3 to 5 Years
Locations : Hyderabad, Bangalore, Mumbai, Gurgaon, Chennai, Pune
Employment Type : Full-Time
Job Description :
We are looking for a skilled IBM Sterling Integrator Developer with 3–5 years of experience to join our team across multiple locations.
The ideal candidate should have strong expertise in IBM Sterling Integrator and integration solutions, along with scripting and database proficiency.
Key Responsibilities :
- Develop, configure, and maintain IBM Sterling Integrator solutions.
- Design and implement integration solutions using IBM Sterling.
- Collaborate with cross-functional teams to gather requirements and provide solutions.
- Work with custom languages and scripting to enhance and automate integration processes.
- Ensure optimal performance and security of integration systems.
Must-Have Skills :
- Hands-on experience with IBM Sterling Integrator and associated integration tools.
- Proficiency in at least one custom scripting language.
- Strong command of shell scripting, Python, and SQL (mandatory).
- Good understanding of EDI standards and protocols is a plus.
Interview Process :
- 2 Rounds of Technical Interviews.
Additional Information :
- Open to candidates from Hyderabad, Bangalore, Mumbai, Gurgaon, Chennai, and Pune.
About Company:
The company is a global leader in secure payments and trusted transactions. They are at the forefront of the digital revolution that is shaping new ways of paying, living, doing business and building relationships that pass on trust along the entire payments value chain, enabling sustainable economic growth. Their innovative solutions, rooted in a rock-solid technological base, are environmentally friendly, widely accessible and support social transformation.
Role Overview
- Senior Engineer with a strong background and experience in cloud-related technologies and architectures. Can design target cloud architectures to transform existing architectures together with the in-house team, and can actively configure and build cloud architectures hands-on while guiding others.
Key Knowledge
- 3–5+ years of experience with AWS, GCP, or Azure technologies
- Likely certified on one or more of the major cloud platforms
- Strong hands-on experience with technologies such as Terraform, Kubernetes, and Docker, including container orchestration
- Ability to guide and lead internal agile teams on cloud technology
- Background in the financial services industry, or similar experience with critical operations
- Strong knowledge of Windows and Linux
- Experience working with version control systems such as Git
- Hands-on experience with tools such as Docker, SonarQube, Ansible, Kubernetes, and ELK
- Basic understanding of SQL commands
- Experience working with Azure DevOps
Experience: 3+ years in cloud architecture
About Company:
The company is a global leader in secure payments and trusted transactions. They are at the forefront of the digital revolution that is shaping new ways of paying, living, doing business and building relationships that pass on trust along the entire payments value chain, enabling sustainable economic growth. Their innovative solutions, rooted in a rock-solid technological base, are environmentally friendly, widely accessible and support social transformation.
Cloud Architect / Lead
Role Overview
- Senior Engineer with a strong background and experience in cloud-related technologies and architectures. Can design target cloud architectures to transform existing architectures together with the in-house team, and can actively configure and build cloud architectures hands-on while guiding others.
Key Knowledge
- 3–5+ years of experience with AWS, GCP, or Azure technologies
- Likely certified on one or more of the major cloud platforms
- Strong hands-on experience with technologies such as Terraform, Kubernetes, and Docker, including container orchestration
- Ability to guide and lead internal agile teams on cloud technology
- Background in the financial services industry, or similar experience with critical operations
- Bachelor's degree in Computer Science or a related field, or equivalent work experience
- Strong understanding of cloud infrastructure and services, such as AWS, Azure, or Google Cloud Platform
- Experience with infrastructure as code tools such as Terraform or CloudFormation
- Proficiency in scripting languages such as Python, Bash, or PowerShell
- Familiarity with DevOps methodologies and tools such as Git, Jenkins, or Ansible
- Strong problem-solving and analytical skills
- Excellent communication and collaboration skills
- Ability to work independently and as part of a team
- Willingness to learn new technologies and tools as required
Laravel Developer Responsibilities:
- Discussing project aims with the client and development team.
- Designing and building web applications using Laravel.
- Troubleshooting and debugging issues in implementations and builds.
- Working with front-end and back-end developers on projects.
- Testing functionality for users and the backend.
- Ensuring that integrations run smoothly.
- Scaling projects based on client feedback.
- Recording and reporting on work done in Laravel.
- Maintaining web-based applications.
- Presenting work in meetings with clients and management.
Laravel Developer Requirements:
- A degree in programming, computer science, or a related field.
- Experience working with PHP, performing unit testing, and managing APIs such as REST.
- A solid understanding of application design using Laravel.
- Knowledge of database design and querying using SQL.
- Proficiency in HTML and JavaScript.
- Practical experience using the MVC architecture.
- A portfolio of applications and programs to your name.
- Problem-solving skills and critical mindset.
- Great communication skills.
- The desire and ability to learn.
Note: We are hiring candidates from Tamil Nadu only.
As an MLOps Engineer in QuantumBlack you will:
- Develop and deploy technology that enables data scientists and data engineers to build, productionize, and deploy machine learning models following best practices, and work to set the standards for SWE and DevOps practices within multi-disciplinary delivery teams.
- Choose and use the right cloud services, DevOps tooling, and ML tooling for the team to be able to produce high-quality code that allows your team to release to production.
- Build modern, scalable, and secure CI/CD pipelines to automate development and deployment workflows used by data scientists (ML pipelines) and data engineers (data pipelines).
- Shape and support next-generation technology that enables scaling ML products and platforms, bringing expertise in cloud to enable ML use-case development, including MLOps.
Our Tech Stack:
We leverage AWS, Google Cloud, Azure, Databricks, Docker, Kubernetes, Argo, Airflow, Kedro, Python, Terraform, GitHub Actions, MLflow, Node.js, React, and TypeScript, amongst others, in our projects.
Key Skills:
- Excellent hands-on, expert knowledge of cloud platform infrastructure and administration (Azure/AWS/GCP), with strong knowledge of cloud services integration and cloud security.
- Expertise setting up CI/CD processes and building and maintaining secure DevOps pipelines with at least 2 major DevOps stacks (e.g., Azure DevOps, GitLab, Argo).
- Experience with modern development methods and tooling: containers (e.g., Docker) and container orchestration (Kubernetes), CI/CD tools (e.g., CircleCI, Jenkins, GitHub Actions, Azure DevOps), version control (Git, GitHub, GitLab), and orchestration/DAG tools (e.g., Argo, Airflow, Kubeflow).
- Hands-on coding skills in Python 3 (e.g., APIs and automated testing frameworks and libraries such as pytest), Infrastructure as Code (e.g., Terraform), and Kubernetes artifacts (e.g., deployments, operators, Helm charts).
- Experience setting up at least one contemporary MLOps tool (e.g., experiment tracking, model governance, packaging, deployment, feature store); a short illustrative sketch follows this list.
- Practical knowledge of delivering and maintaining production software such as APIs and cloud infrastructure.
- Knowledge of SQL (intermediate level or better preferred) and familiarity working with at least one common RDBMS (MySQL, Postgres, SQL Server, Oracle).
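As noted in the Key Skills list, here is a minimal sketch of the experiment-tracking side of MLOps tooling, using the MLflow tracking API from the tech stack above; the experiment name, parameters, and metric values are invented for illustration.

```python
"""Minimal MLflow experiment-tracking sketch. The experiment name, parameters,
and metric values are illustrative placeholders only."""
import mlflow

# Hypothetical experiment name used for the example.
mlflow.set_experiment("demo-churn-model")

with mlflow.start_run(run_name="baseline"):
    # Log hyperparameters for the run.
    mlflow.log_param("model_type", "logistic_regression")
    mlflow.log_param("regularisation_C", 1.0)

    # ... train and evaluate the model here ...

    # Log evaluation metrics so runs can be compared later.
    mlflow.log_metric("roc_auc", 0.87)
    mlflow.log_metric("accuracy", 0.91)
```

Runs logged this way can then be compared side by side in the MLflow UI, which is the kind of experiment-tracking and model-governance workflow this role supports.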
Job description
The ideal candidate is self-motivated, a multi-tasker, and a demonstrated team player. You will be a lead developer responsible for developing new software security policies and enhancing security on existing products. You should excel in working with large-scale applications and frameworks and have outstanding communication and leadership skills.
Responsibilities
- Consulting with management on the operational requirements of software solutions.
- Contributing expertise on information system options, risk, and operational impact.
- Mentoring junior software developers in gaining experience and assuming DevOps responsibilities.
- Managing the installation and configuration of solutions.
- Collaborating with developers on software requirements, as well as interpreting test stage data.
- Developing interface simulators and designing automated module deployments.
- Completing code and script updates, as well as resolving product implementation errors.
- Overseeing routine maintenance procedures and performing diagnostic tests.
- Documenting processes and monitoring performance metrics.
- Conforming to best practices in network administration and cybersecurity.
Qualifications
- Minimum of 2 years of hands-on experience in software development and DevOps, specifically managing AWS infrastructure such as EC2, RDS, ElastiCache, S3, IAM, CloudTrail, and other services provided by AWS.
- Experience building a multi-region, highly available, auto-scaling infrastructure that optimises performance and cost; planning for future infrastructure as well as maintaining and optimising existing infrastructure.
- Conceptualise, architect and build automated deployment pipelines in a CI/CD environment like Jenkins.
- Conceptualise, architect and build a containerised infrastructure using Docker, Mesosphere or similar SaaS platforms.
- Conceptualise, architect and build a secured network utilising VPCs with inputs from the security team.
- Work with developers & QA to institute a policy of Continuous Integration with Automated testing Architect, build and manage dashboards to provide visibility into delivery, production application functional and performance status.
- Work with developers to institute systems, policies and workflows which allow for rollback of deployments Triage release of applications to production environment on a daily basis.
- Interface with developers and triage SQL queries that need to be executed in production environments.
- Assist the developers and on calls for other teams with post mortem, follow up and review of issues affecting production availability.
- Minimum 2 years’ experience in Ansible.
- Must have written playbooks to automate provisioning of AWS infrastructure as well as automation of routine maintenance tasks (an illustrative sketch follows this list).
- Must have had prior experience automating deployments to production and lower environments.
- Experience with APM tools like New Relic and log management tools.
- Our entire platform is hosted on AWS, comprising web applications, web services, RDS, Redis, and Elasticsearch clusters, along with several other AWS resources such as EC2, S3, CloudFront, Route 53, and SNS.
- Essential functions: system architecture, process design, and implementation.
- Minimum of 2 years of scripting experience in Ruby/Python (preferable) and Shell; experience with web application deployment systems and continuous integration tools (Ansible); establishing and enforcing network security policy (AWS VPC, security groups) and ACLs.
- Establishing and enforcing systems monitoring tools and standards
- Establishing and enforcing Risk Assessment policies and standards
- Establishing and enforcing Escalation policies and standards
- Hands-on experience provisioning AWS services such as EC2, S3, EBS, AMI, VPC, ELB, RDS, Auto Scaling groups, and CloudFormation.
- Good experience with the build and release process, and extensive involvement in CI/CD using Jenkins.
- Experienced with configuration management tools such as Ansible.
- Designing, implementing, and supporting fully automated Jenkins CI/CD pipelines.
- Extensively worked on Jenkins for continuous integration and for end-to-end automation of all builds and deployments.
- Proficient with Docker-based container deployments to create environments for dev teams and to containerise environment delivery for releases.
- Experience working with Docker Hub, creating Docker images, and handling multiple images, primarily for middleware installations and domain configuration.
- Good knowledge of version control systems, including Git and GitHub.
- Good experience with build tools.
- Implemented CI/CD pipelines using Jenkins, Ansible, Docker, Kubernetes, YAML, and manifests.
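As referenced in the qualifications above, routine maintenance automation in this role is typically expressed as Ansible playbooks; the sketch below shows the same idea in plain Python with boto3 instead, snapshotting EBS volumes that carry a hypothetical Backup=true tag. The region and tag key are assumptions for illustration only.

```python
"""Minimal boto3 sketch of a routine maintenance task: snapshot every EBS
volume carrying a (hypothetical) Backup=true tag. The region and tag key are
assumptions for illustration, not values from the job description."""
import boto3


def snapshot_tagged_volumes(region: str = "ap-south-1") -> None:
    ec2 = boto3.client("ec2", region_name=region)
    # Page through all volumes that carry the Backup=true tag.
    paginator = ec2.get_paginator("describe_volumes")
    for page in paginator.paginate(
        Filters=[{"Name": "tag:Backup", "Values": ["true"]}]
    ):
        for volume in page["Volumes"]:
            snap = ec2.create_snapshot(
                VolumeId=volume["VolumeId"],
                Description="Routine automated backup",
            )
            print(f"Created {snap['SnapshotId']} for {volume['VolumeId']}")


if __name__ == "__main__":
    snapshot_tagged_volumes()
```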
DevOps Consultant!! MERN Stack Project Manager – Systems (Enterprise or Solutions) Architect needed!
Hello superstar,
I appreciate you taking the time to read this. I have posted a job for developers to work on a start-up; the link is ......
I would need someone with DevOps experience to ensure that the project is undertaken to the highest standards possible. I have had many experiences where ‘completed’ software, after years of development, was filled with bugs, and it would have been more cost-effective to start from scratch than to attempt to find and correct all the bugs.
I have attempted to learn as much as possible, but I now have an opportunity, and it would better serve the venture to have someone handle the management of the project to ensure that:
- We choose the most appropriate technology
- We choose competent developers in those technologies
- The architecture and data modeling are clearly defined in a ‘blueprint’ plan
- A DevOps environment and processes are set up and the developers understand what is required
- Proper tests are carried out to ensure everything works as intended
- There are processes for testers to follow and competent testers are selected to follow them
- Accessibility, localization, and internationalization are planned ahead of time
- Security, scalability, and other future probabilities that I may not even be aware of are considered and planned ahead of time
- Documentation and code reviews, refactoring and other quality assurance processes are undertaken
- Working software is produced, along with systems that enable new developers or teams to easily take over and/or contribute new modules or updates in a controlled and organized fashion
- Cost estimates, budgets/projections, and the use of SaaS, hosting, and other third-party services and applications are accounted for
I am more concerned with a professional, world-class organizational system than with any particular type of software being produced, as a strong foundation will enable anything to be created with efficacy and precision.
Again, thank you for reading this, please reply with the word “superstar” anywhere in the second line of your response. I look forward to hearing from you.
Warm wishes,
DevOps Evangelist





