

We are seeking a skilled DevOps Engineer with 3+ years of experience to join our team on a permanent work-from-home basis.
Responsibilities:
- Develop and maintain infrastructure using Ansible.
- Write Ansible playbooks (see the sketch after this list).
- Implement CI/CD pipelines.
- Manage GitLab repositories.
- Monitor and troubleshoot infrastructure issues.
- Ensure security and compliance.
- Document best practices.
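For illustration only, a minimal Python sketch of the kind of playbook-driven automation referenced above, which simply invokes ansible-playbook via subprocess; the file names inventory.ini and site.yml are hypothetical placeholders, not part of this posting:

# Minimal sketch: run an Ansible playbook from Python and surface its output.
# Assumes ansible-playbook is on PATH; inventory.ini and site.yml are placeholder names.
import subprocess
import sys

def run_playbook(playbook: str, inventory: str) -> int:
    """Invoke ansible-playbook and return its exit code."""
    result = subprocess.run(
        ["ansible-playbook", "-i", inventory, playbook],
        capture_output=True,
        text=True,
    )
    print(result.stdout)
    if result.returncode != 0:
        print(result.stderr, file=sys.stderr)
    return result.returncode

if __name__ == "__main__":
    sys.exit(run_playbook("site.yml", "inventory.ini"))

A wrapper like this is typically what a CI/CD stage calls, so the same playbook run behaves identically on a laptop and in the pipeline.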
Qualifications:
- Proven DevOps experience.
- Expertise with Ansible and CI/CD pipelines.
- Proficient with GitLab.
- Strong scripting skills.
- Excellent problem-solving and communication skills.
Regards,
Aishwarya M
Associate HR

Similar jobs
Salesforce DevOps/Release Engineer
Resource type - Salesforce DevOps/Release Engineer
Experience - 5 to 8 years
Norms - PF & UAN mandatory
Resource Availability - Immediate or Joining time in less than 15 days
Job - Remote
Shift timings - UK timing (1pm to 10 pm or 2pm to 11pm)
Required Experience:
- 5–6 years of hands-on experience in Salesforce DevOps, release engineering, or deployment management.
- Strong expertise in Salesforce deployment processes, including CI/CD pipelines.
- Significant hands-on experience with at least two of the following tools: Gearset, Copado, Flosum.
- Solid understanding of Salesforce architecture, metadata, and development lifecycle.
- Familiarity with version control systems (e.g., Git) and agile methodologies.
Key Responsibilities:
- Design, implement, and manage CI/CD pipelines for Salesforce deployments using Gearset, Copado, or Flosum (see the sketch after this list).
- Automate and optimize deployment processes to ensure efficient, reliable, and repeatable releases across Salesforce environments.
- Collaborate with development, QA, and operations teams to gather requirements and ensure alignment of deployment strategies.
- Monitor, troubleshoot, and resolve deployment and release issues.
- Maintain documentation for deployment processes and provide training on best practices.
- Stay updated on the latest Salesforce DevOps tools, features, and best practices.
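By way of a hedged illustration of the scripted deployment step such a pipeline might wrap, here is a minimal Python sketch that shells out to the Salesforce CLI (sf); the org alias mySandbox and the force-app directory are assumptions, and Gearset/Copado/Flosum would replace this step with their own tooling:

# Minimal sketch: run a Salesforce CLI source deploy from a CI job.
# Assumes the sf CLI is installed and the org alias "mySandbox" is already authenticated
# (hypothetical alias); managed tools like Gearset/Copado/Flosum provide their own equivalents.
import subprocess
import sys

def deploy_source(source_dir: str, target_org: str) -> int:
    """Deploy local Salesforce metadata to the target org; return the CLI exit code."""
    result = subprocess.run(
        ["sf", "project", "deploy", "start",
         "--source-dir", source_dir,
         "--target-org", target_org],
        text=True,
    )
    return result.returncode

if __name__ == "__main__":
    sys.exit(deploy_source("force-app", "mySandbox"))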
Technical Skills:
- Deployment Tools: Hands-on with Gearset, Copado, Flosum for Salesforce deployments.
- CI/CD: Building and maintaining pipelines, automation, and release management.
- Version Control: Proficiency with Git and related workflows.
- Salesforce Platform: Understanding of metadata, SFDX, and environment management.
- Scripting: Familiarity with scripting (e.g., Shell, Python) for automation (preferred).
- Communication: Strong written and verbal communication skills.
Preferred Qualifications:
Bachelor’s degree in Computer Science, Information Technology, or related field.
Certifications:
Salesforce certifications (e.g., Salesforce Administrator, Platform Developer I/II) are a plus.
Experience with additional DevOps tools (Jenkins, GitLab, Azure DevOps) is beneficial.
Experience with Salesforce DX and deployment strategies for large-scale orgs.
tools, build pipelines, deployment and production support.
• Maintaining mission-critical day-to-day operations of the production processing platform and internal company resources.
• Shall have sound working knowledge of various DevOps tools like Git, GitLab, JUnit, SonarQube, Jenkins, Nagios, etc.
• Analysing, executing and streamlining DevOps tools and practices.
• Automating processes with the right tools.
• Facilitating development processes & operations.
• Establishing suitable DevOps channels across the organization.
• Setting up CI/CD to speed up the software development and deployment process.
• Monitoring, reviewing and managing technical operations.
• Bachelor’s or master’s degree in Computer Engineering, Computer Science, Computer Applications, Mathematics, Statistics or a related technical field, or equivalent practical experience. At least 3 years of relevant experience in lieu of the above if from a different stream of education.
• Well-versed in DevOps principles & practices, with hands-on DevOps tool-chain integration experience: Release Orchestration & Automation, Source Code & Build Management, Code Quality & Security Management, Behavior Driven Development, Test Driven Development, Continuous Integration, Continuous Delivery, Continuous Deployment, and Operational Monitoring & Management; extra points if you can demonstrate your knowledge with working examples.
• Hands-on, demonstrable working experience with DevOps tools and platforms, viz. Slack, Jira, Git, Jenkins, code quality & security plugins, Maven, Artifactory, Terraform, Ansible/Chef/Puppet, Spinnaker, Tekton, StackStorm, Prometheus, Grafana, ELK, PagerDuty, VictorOps, etc.
• Well-versed in Virtualization & Containerization; must demonstrate experience in technologies such as Kubernetes, Istio, Docker, OpenShift, Anthos, Oracle VirtualBox, Vagrant, etc.
• Well-versed in AWS and/or Azure and/or Google Cloud; must demonstrate experience in at least FIVE (5) services offered under AWS and/or Azure and/or Google Cloud in any of the categories: Compute or Storage, Database, Networking & Content Delivery, Management & Governance, Analytics, Security, Identity, & Compliance; or equivalent demonstrable Cloud Platform experience.
• Well-versed, with demonstrable working experience, in API Management, API Gateway, Service Mesh, Identity & Access Management, and Data Protection & Encryption tools & platforms.
• Hands-on programming experience in core Java and/or Python and/or JavaScript and/or Scala; freshers passing out of college or lateral movers into IT must be able to code in the languages they have studied.
• Well-versed with Storage, Networks and Storage Networking basics, which will enable you to work in a Cloud environment.
• Well-versed with Network, Data, and Application Security basics, which will enable you to work in a Cloud as well as Business Applications / API services environment.
• Extra points if you are certified in AWS and/or Azure and/or Google Cloud.
- Candidate should be able to write sample programs using the listed tools (Bash, PowerShell, Python or shell scripting).
- Analytical/logical reasoning
- GitHub Actions: should have good working experience, including repository/workflow dispatch and writing reusable workflows.
- AZ CLI: hands-on experience with AZ CLI commands (see the sketch after this list).
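As a hedged example of the scripting plus AZ CLI skills asked for above, a small Python program that shells out to the Azure CLI and parses its JSON output; it assumes az is installed and az login has already been run:

# Minimal sketch: call the Azure CLI from Python and parse its JSON output.
# Assumes the az CLI is installed and `az login` has already been performed.
import json
import subprocess

def list_resource_groups() -> list[str]:
    """Return the names of all resource groups visible to the logged-in account."""
    result = subprocess.run(
        ["az", "group", "list", "--output", "json"],
        capture_output=True,
        text=True,
        check=True,
    )
    return [group["name"] for group in json.loads(result.stdout)]

if __name__ == "__main__":
    for name in list_resource_groups():
        print(name)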
Experience: 8 to 10 years; notice period: 0 to 20 days
Job Description:
- Provision GCP resources based on the architecture design and features aligned with business objectives.
- Monitor resource availability and usage metrics, and provide guidelines for cost and performance optimization.
- Assist IT/business users in resolving GCP service-related issues.
- Provide guidelines for cluster automation and migration approaches and techniques, including ingesting, storing, processing, analysing and exploring/visualising data.
- Provision GCP resources for data engineering and data science projects.
- Assist with automated data ingestion, data migration and transformation (good to have).
- Assist with deployment and troubleshooting of applications in Kubernetes.
- Establish connections and credibility in addressing business needs through the design and operation of cloud-based data solutions.
Key Responsibilities / Tasks:
- Building complex CI/CD pipelines for cloud-native PaaS services such as Databases, Messaging, Storage, and Compute in Google Cloud Platform.
- Building deployment pipelines with GitHub CI (Actions).
- Building Terraform code to deploy infrastructure as code (see the sketch after this list).
- Working with deployment and troubleshooting of Docker, GKE, OpenShift, and Cloud Run.
- Working with Cloud Build, Cloud Composer, and Dataflow.
- Configuring software to be monitored by AppDynamics.
- Configuring Stackdriver logging and monitoring in GCP.
- Working with Splunk, Kibana, Prometheus and Grafana to set up dashboards.
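For illustration, a minimal Python sketch of driving a Terraform deployment of the kind listed above; it assumes the terraform binary is on PATH and that a directory named infra/ (a placeholder) already contains valid configuration:

# Minimal sketch: drive a Terraform workflow (init, plan, apply) from Python.
# Assumes terraform is on PATH; "infra" is a placeholder directory with valid configuration.
import subprocess

def terraform(*args: str, workdir: str = "infra") -> None:
    """Run a terraform subcommand in the working directory, raising on failure."""
    subprocess.run(["terraform", *args], cwd=workdir, check=True)

if __name__ == "__main__":
    terraform("init", "-input=false")
    terraform("plan", "-input=false", "-out=tfplan")
    terraform("apply", "-input=false", "tfplan")

In a GitHub Actions pipeline the same three calls would usually be split into separate steps so the plan can be reviewed before apply runs.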
Your skills, experience, and qualifications:
- Total experience of 5+ years in DevOps, with at least 4 years of experience in Google Cloud and GitHub CI.
- Should have strong experience in Microservices/API.
- Should have strong experience in DevOps tools like GitHub CI, TeamCity, Jenkins and Helm.
- Should know application deployment and testing strategies in Google Cloud Platform.
- Defining and setting development, test, release, update, and support processes for DevOps operations.
- Strive for continuous improvement and build continuous integration, continuous delivery, and continuous deployment pipelines (CI/CD pipelines).
- Excellent understanding of Java
- Knowledge of Kafka, ZooKeeper, Hazelcast, Pub/Sub is nice to have.
- Understanding of cloud networking, security such as software defined networking/firewalls, virtual networks and load balancers.
- Understanding of cloud identity and access management.
- Understanding of the compute runtime and the differences between native compute, virtual machines, and containers.
- Configuring and managing databases such as Oracle, Cloud SQL, and Cloud Spanner.
- Excellent troubleshooting skills.
- Working knowledge of various tools, open-source technologies
- Awareness of critical concepts of Agile principles
- Certification as a Google Professional Cloud DevOps Engineer is desirable.
- Experience with an Agile/SCRUM environment.
- Familiar with Agile Team management tools (JIRA, Confluence)
- Understand and promote Agile values: FROCC (Focus, Respect, Openness, Commitment, Courage)
- Good communication skills
- Pro-active team player
- Comfortable working in multi-disciplinary, self-organized teams
- Professional knowledge of English
- Differentiators: knowledge/experience about
Srijan Technologies is hiring for the DevOps Lead position- Cloud Team with a permanent WFH option.
Immediate Joiners or candidates with 30 days notice period are preferred.
Requirements:
- Minimum 4-6 years of experience in DevOps release engineering.
- Expert-level knowledge of Git.
- Must have great command of Kubernetes.
- Certified Kubernetes Administrator.
- Expert-level knowledge of Shell scripting & Jenkins to maintain continuous integration/deployment infrastructure.
- Expert-level knowledge of Docker.
- Expert-level knowledge of configuration management and provisioning toolchains; at least one of Ansible / Chef / Puppet.
- Basic level of web development experience and setup: Apache, Nginx, MySQL
- Basic level of familiarity with Agile/Scrum process and JIRA.
- Expert level of Knowledge in AWS Cloud Services.

- Hands-on experience building database-backed web applications using Python based frameworks
- Excellent knowledge of Linux and experience developing Python applications that are deployed in Linux environments
- Experience building client-side and server-side API-level integrations in Python
- Experience in containerization and container orchestration systems like Docker, Kubernetes, etc.
- Experience with NoSQL document stores like the Elastic Stack (Elasticsearch, Logstash, Kibana)
- Experience in using and managing Git based version control systems - Azure DevOps, GitHub, Bitbucket etc.
- Experience in using project management tools like Jira, Azure DevOps etc.
- Expertise in Cloud based development and deployment using cloud providers like AWS or Azure
A.P.T Portfolio is a high-frequency trading firm that specialises in Quantitative Trading & Investment Strategies. Founded in November 2009, it has been a major liquidity provider in global stock markets.
As a manager, you would be in charge of managing the DevOps team, and your remit shall include the following:
- Private Cloud - Design & maintain a high performance and reliable network architecture to support HPC applications
- Scheduling Tool - Implement and maintain an HPC scheduling technology like Kubernetes, Hadoop YARN, Mesos, HTCondor or Nomad for processing & scheduling analytical jobs. Implement controls which allow analytical jobs to seamlessly utilize idle capacity on the private cloud.
- Security - Implement best security practices and data isolation policies between different divisions internally.
- Capacity Sizing - Monitor private cloud usage and share details with different teams. Plan capacity enhancements on a quarterly basis.
- Storage solution - Optimize storage solutions like NetApp, EMC, Quobyte for analytical jobs. Monitor their performance on a daily basis to identify issues early.
- NFS - Implement and optimize latest version of NFS for our use case.
- Public Cloud - Drive AWS/Google-Cloud utilization in the firm for increasing efficiency, improving collaboration and for reducing cost. Maintain the environment for our existing use cases. Further explore potential areas of using public cloud within the firm.
- Backups - Identify and automate backup of all crucial data/binaries/code etc. in a secure manner at durations warranted by the use case. Ensure that recovery from backup is tested and seamless.
- Access Control - Maintain passwordless access control and improve security over time. Minimize failures of automated jobs due to unsuccessful logins.
- Operating System - Plan, test and roll out new operating systems for all production, simulation and desktop environments. Work closely with developers to highlight new performance-enhancement capabilities of new versions.
- Configuration Management - Work closely with the DevOps/development team to freeze configurations/playbooks for various teams & internal applications. Deploy and maintain standard tools such as Ansible, Puppet, Chef etc. for the same.
- Data Storage & Security Planning - Maintain tight control of root access on various devices. Ensure root access is rolled back as soon as the desired objective is achieved.
- Audit access logs on devices. Use third-party tools to put in place a monitoring mechanism for early detection of any suspicious activity.
- Maintaining all third-party tools used for development and collaboration - This shall include maintaining a fault-tolerant environment for Git/Perforce, productivity tools such as Slack/Microsoft Teams, and build tools like Jenkins/Bamboo etc.
Qualifications
- Bachelor's or Master's level degree, preferably in CSE/IT
- 10+ years of relevant experience in a sysadmin function
- Must have strong knowledge of IT infrastructure, Linux, networking and grid.
- Must have a strong grasp of automation & data management tools.
- Proficient in scripting languages and Python
Desirables
- Professional attitude; a co-operative and mature approach to work; must be focused, structured and well considered; strong troubleshooting skills.
- Exhibit a high level of individual initiative and ownership, and effectively collaborate with other team members.
APT Portfolio is an equal opportunity employer
What is the role?
As a DevOps Engineer, you are responsible for setting up and maintaining Git repositories and DevOps tools like Jenkins, UCD, Docker, Kubernetes, JFrog Artifactory, cloud monitoring tools, and cloud security.
Key Responsibilities
- Set up, configure, and maintain Git repos, Jenkins, UCD, etc. for multi-cloud hosting environments.
- Architect and maintain the server infrastructure in AWS. Build highly resilient infrastructure following industry best practices.
- Work on Docker images and maintain Kubernetes clusters.
- Develop and maintain automation scripts using Ansible or other available tools.
- Maintain and monitor cloud Kubernetes clusters and patch them when necessary (see the sketch after this list).
- Work on cloud security tools to keep applications secure.
- Participate in the software development lifecycle, specifically the infra design, execution, and debugging required to achieve successful implementation of integrated solutions within the portfolio.
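As a hedged illustration of the cluster-monitoring side of the role, a minimal Python sketch using the official kubernetes client to report pods that are not in a Running or Succeeded phase; it assumes a working kubeconfig and that the kubernetes package is installed:

# Minimal sketch: list pods that are not Running/Succeeded across all namespaces.
# Assumes a valid local kubeconfig and the `kubernetes` Python package.
from kubernetes import client, config

def report_unhealthy_pods() -> None:
    """Print namespace/name/phase for every pod not in a healthy phase."""
    config.load_kube_config()  # for in-cluster use, load_incluster_config() would apply
    v1 = client.CoreV1Api()
    for pod in v1.list_pod_for_all_namespaces(watch=False).items:
        phase = pod.status.phase
        if phase not in ("Running", "Succeeded"):
            print(f"{pod.metadata.namespace}/{pod.metadata.name}: {phase}")

if __name__ == "__main__":
    report_unhealthy_pods()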
What are we looking for?
- Minimum 4-6 years of experience in the IT industry.
- Expertise in implementing and managing DevOps CI/CD pipelines.
- Experience with DevOps automation tools, and very well versed in DevOps frameworks and Agile.
- Working knowledge of scripting using shell, Python, Terraform, and Ansible, Puppet or Chef.
- Experience with and a good understanding of any cloud, such as AWS, Azure or Google Cloud.
- Knowledge of Docker and Kubernetes is required.
- Proficient troubleshooting skills, with proven ability to resolve complex technical issues.
- Experience working with ticketing tools.
- Middleware technologies knowledge or database knowledge is desirable.
- Experience with and good command of the Jira tool is a plus.
What can you look for?
A wholesome opportunity in a fast-paced environment will enable you to juggle between concepts yet maintain the quality of content, interact, share your ideas, and have loads of learning while at work. Work with a team of highly talented young professionals and enjoy the benefits of being at Xoxoday.
We are
A fast-growing SaaS commerce company based in Bangalore with offices in Delhi, Mumbai, SF, Dubai, Singapore, and Dublin. We have three products in our portfolio: Plum, Empuls, and Compass. Xoxoday works with over 1000 global clients. We help our clients engage and motivate their employees, sales teams, channel partners, or consumers for better business results.
Way forward
We look forward to connecting with you. As you may take time to review this opportunity, we will wait for a reasonable time of around 3-5 days before we screen the collected applications and start lining up job discussions with the hiring manager. However, we assure you that we will attempt to maintain a reasonable time window for successfully closing this requirement. The candidates will be kept informed and updated on the feedback and application status.

Role : DevOps Engineer
Experience : 1 to 2 years and 2 to 5 Years as DevOps Engineer (2 Positions)
Location : Bangalore. 5 Days Working.
Education Qualification : B.Tech/B.E from Tier-1/Tier-2/Tier-3 colleges or equivalent institutes
Skills :- DevOps Engineering, Ruby On Rails or Python and Bash/Shell skills, Docker, rkt or similar container engine, Kubernetes or similar clustering solutions
As DevOps Engineer, you'll be part of the team building the stage for our Software Engineers to work on, helping to enhance our product performance and reliability.
Responsibilities:
- Build & operate infrastructure to support the website, backend clusters, and ML projects in the organization.
- Helping teams become more autonomous and allowing the Operations team to focus on improving the infrastructure and optimizing processes.
- Delivering system management tooling to the engineering teams.
- Working on your own applications which will be used internally.
- Contributing to open source projects that we are using (or that we may start).
- Be an advocate for engineering best practices in and out of the company.
- Organizing tech talks and participating in meetups and representing Box8 at industry events.
- Sharing pager duty for the rare instances of something serious happening.
- Collaborating with other developers to understand & set up tooling needed for Continuous Integration/Delivery/Deployment (CI/CD) practices.
Requirements:
- 1+ years of industry experience.
- Scale existing back-end systems to handle ever-increasing amounts of traffic and new product requirements.
- Ruby On Rails or Python and Bash/Shell skills.
- Experience managing complex systems at scale.
- Experience with Docker, rkt or similar container engine.
- Experience with Kubernetes or similar clustering solutions.
- Experience with tools such as Ansible or Chef.
- Understanding of the importance of smart metrics and alerting.
- Hands on experience with cloud infrastructure provisioning, deployment, monitoring (we are on AWS and use ECS, ELB, EC2, Elasticache, Elasticsearch, S3, CloudWatch).
- Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.
- Knowledge of data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
- Experience in working on linux based servers.
- Managing large scale production grade infrastructure on AWS Cloud.
- Good knowledge of scripting languages like Ruby, Python or Bash.
- Experience in creating deployment pipelines from scratch.
- Expertise in any of the CI tools, preferably Jenkins.
- Good knowledge of Docker containers and their usage.
- Using infra/app monitoring tools like CloudWatch/New Relic/Sensu (see the sketch below).
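For illustration, a minimal Python sketch of the CloudWatch-style monitoring mentioned above, using boto3 to pull average CPU utilisation for one EC2 instance; AWS credentials/region are assumed to be configured and the instance ID is a placeholder:

# Minimal sketch: fetch average EC2 CPU utilisation from CloudWatch for the last hour.
# Assumes configured AWS credentials/region; "i-0123456789abcdef0" is a placeholder ID.
from datetime import datetime, timedelta, timezone
import boto3

def average_cpu(instance_id: str) -> list[dict]:
    """Return 5-minute average CPUUtilization datapoints for the given instance."""
    cloudwatch = boto3.client("cloudwatch")
    now = datetime.now(timezone.utc)
    response = cloudwatch.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
        StartTime=now - timedelta(hours=1),
        EndTime=now,
        Period=300,
        Statistics=["Average"],
    )
    return sorted(response["Datapoints"], key=lambda point: point["Timestamp"])

if __name__ == "__main__":
    for point in average_cpu("i-0123456789abcdef0"):
        print(point["Timestamp"], round(point["Average"], 2))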
Good to have:
- Knowledge of Ruby on Rails based applications and their deployment methodologies.
- Experience working on Container Orchestration tools like Kubernetes/ECS/Mesos.
- Extra points for experience with front-end development, New Relic, GCP, Kafka, Elasticsearch.

