- Collaborate with enterprise architects and IT program managers to enhance existing business applications and facilitate solutions that meet business requirements.
- Assist in architecting technical solutions for enterprise systems using the MuleSoft product suite.
- Design patterns for building middleware systems from the ground up using message routing, content enrichment, message filtering, message transformation, guaranteed delivery, message sequencing, batch message processing, error handling, and reconciliation mechanisms.
- Knowledge of Web Services interoperability and WS-* standards, and the ability to suggest, critique, and formulate solutions in multi-vendor and architecture committee meetings.
- Identify, analyze, and design integration flows using Mule ESB Anypoint Studio, and technically own and manage the process of ensuring on-time and on-budget build and integration of the various elements of the solution.
- Good understanding of integration design patterns and best practices.
- In-depth experience with Agile, Scrum, and iterative development practices.
- Assist on technical POCs to prove out technology, ultimately leading to technology selection.
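For illustration only, the sketch below shows content-based routing with message filtering and basic error handling in plain Python; real MuleSoft flows would be modelled in Anypoint Studio, and the message types and handlers here are hypothetical.

```python
# Illustrative sketch of content-based routing with filtering and error handling.
# Not MuleSoft code; in practice these flows are modelled in Anypoint Studio.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Message:
    payload: dict
    headers: dict = field(default_factory=dict)

def dead_letter(message: Message, reason: str = "unroutable") -> None:
    """Park a message that cannot be processed, for later reconciliation."""
    print(f"dead-lettered ({reason}): {message.payload}")

def route(message: Message, routes: dict[str, Callable[[Message], None]]) -> None:
    """Dispatch a message to a handler based on its 'type' header (content-based routing)."""
    handler = routes.get(message.headers.get("type", ""))
    if handler is None:
        dead_letter(message)                 # message filtering: no matching route
        return
    try:
        handler(message)
    except Exception as exc:                 # error handling: capture and park
        dead_letter(message, reason=str(exc))

# Hypothetical handlers for two message types
routes = {
    "order":   lambda m: print("enrich + forward order", m.payload),
    "invoice": lambda m: print("transform + forward invoice", m.payload),
}

route(Message({"id": 1}, {"type": "order"}), routes)
```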

Job title - DevOps Engineer
Experience - 4+ years
Location - Pune (Onsite)
Primary Skills - Kubernetes, AWS
Roles and Responsibilities:
Cloud Infrastructure Management: Design, deploy, and maintain AWS-based cloud infrastructure using best practices for scalability, security, and cost optimization.
Kubernetes & Containerization: Manage and orchestrate containerized applications using Kubernetes and Docker, ensuring efficient deployments and scaling.
CI/CD Pipelines: Develop, implement, and maintain continuous integration and continuous deployment (CI/CD) pipelines to enable fast, reliable software releases.
Automation: Automate infrastructure provisioning, configuration, and management using tools like Terraform, CloudFormation, or Ansible.
Monitoring & Logging: Set up and manage monitoring, logging, and alerting solutions (e.g., Prometheus, Grafana, CloudWatch, ELK Stack) to ensure the health and performance of applications and infrastructure.
Collaboration: Work closely with software engineers, system administrators, and other teams to identify bottlenecks, improve processes, and resolve issues.
Security: Implement and monitor security measures within the cloud infrastructure, ensuring compliance with industry standards and best practices (e.g., IAM, VPC, SSL/TLS).
Backup and Disaster Recovery: Set up and maintain disaster recovery plans, backup strategies, and ensure business continuity.
Performance Tuning: Continuously analyze and optimize application performance, ensuring fast and efficient operations.
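As a hedged illustration of the monitoring and alerting responsibilities listed above, the sketch below creates a CloudWatch CPU alarm with boto3; the alarm name, instance ID, region, and SNS topic ARN are placeholders, not values from this posting.

```python
# Hedged sketch: create a CPU utilization alarm with boto3.
# Alarm name, instance ID, and SNS topic ARN are placeholders for illustration only.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="ap-south-1")

cloudwatch.put_metric_alarm(
    AlarmName="example-high-cpu",                                          # placeholder
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],   # placeholder
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:ap-south-1:123456789012:example-alerts"],   # placeholder
)
```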
DevOps Engineer - Remote, Full-time
Coincrowd is an innovative fintech company. We offer a crypto platform for seamless payments, Crypto Vouchers, crypto trading, portfolio management, real-time market data, breaking news, and powerful analytics. Please visit https://coincrowd.com/ for more information.
Domain: Finance, Blockchain, Crypto
Job Description:
We're seeking a detail-oriented and proactive DevOps Engineer who has a strong background in Google Cloud Platform (GCP) environments. The ideal candidate will be comfortable operating in a fast-paced, dynamic startup environment, where they will have the opportunity to make substantial contributions.
Key Responsibilities:
- Develop, test, and maintain infrastructure on GCP.
- Automate infrastructure, application deployment, scaling, and management using Kubernetes and other similar tools.
- Collaborate with our software development team to ensure seamless deployment of software updates and enhancements.
- Monitor system performance and troubleshoot issues.
- Ensure high levels of performance, availability, sustainability, and security.
- Implement DevOps best practices, such as IaC (Infrastructure as Code).
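As a hedged illustration of the Kubernetes automation mentioned above, the sketch below scales a Deployment with the official Kubernetes Python client; the deployment name, namespace, and replica count are hypothetical, and a valid kubeconfig for the cluster is assumed.

```python
# Illustrative only: scale a Deployment with the Kubernetes Python client.
# Deployment name and namespace are hypothetical; assumes a valid kubeconfig.
from kubernetes import client, config

config.load_kube_config()          # or config.load_incluster_config() when running in a pod
apps = client.AppsV1Api()

apps.patch_namespaced_deployment_scale(
    name="payments-api",           # hypothetical deployment
    namespace="default",
    body={"spec": {"replicas": 3}},
)
```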
Qualifications:
- Proven experience as a DevOps Engineer or similar role in software development and system administration.
- Strong experience with GCP (Google Cloud Platform), including Compute Engine, Cloud Functions, Cloud Storage, and other relevant GCP services.
- Knowledge of Kubernetes, Docker, Jenkins, or similar technologies.
- Familiarity with network protocols, firewalls, and VPN.
- Experience with scripting languages such as Python, Bash, etc.
- Understanding of Infrastructure as Code (IaC) tools such as Terraform or CloudFormation.
- Excellent problem-solving skills, attention to detail, and ability to work in a team.
What We Offer:
In recognition of your valuable contributions, you will receive an equity-based compensation package. Join our dynamic and innovative team in the rapidly evolving fintech industry and play a key role in shaping the future of Coincrowd's success.
If you're ready to be at the forefront of the Payment Technology revolution and have the vision and experience to drive sales growth in the crypto space, please join us in our mission to redefine fintech at Coincrowd.
- Configure, optimize, document, and support the infrastructure components of software products (which are hosted in colocated facilities and cloud services such as AWS)
- Design and build tools and frameworks that support deployment and management of platforms
- Design, build, and deliver cloud computing solutions, hosted services, and underlying software infrastructures
- Build core functionality of our cloud-based platform product, deliver secure, reliable services and construct third party integrations
- Assist in coaching application developers on proper DevOps techniques for building scalable applications in the microservices paradigm
- Foster collaboration with software product development and architecture teams to ensure releases are delivered with repeatable and auditable processes
- Support and troubleshoot scalability, high availability, performance, monitoring, backup, and restores of different environments
- Work independently across multiple platforms and applications to understand dependencies
- Evaluate new tools, technologies, and processes to improve speed, efficiency, and scalability of continuous integration environments
- Design and architect solutions for existing client-facing applications as they are moved into cloud environments such as AWS
- Competencies
- Full understanding of scripting and automated process management in languages such as Shell, Ruby, and/or Python
- Working knowledge of SCM tools such as Git, GitHub, Bitbucket, etc.
- Working knowledge of Amazon Web Services and related APIs
- Ability to deliver and manage web or cloud-based services
- General familiarity with monitoring tools
- General familiarity with configuration/provisioning tools such as Terraform
- Experience
- Experience working in an Agile-type environment
- 4+ years of experience with cloud-based provisioning (Azure, AWS, Google), monitoring, troubleshooting, and related DevOps technologies
- 4+ years of experience with containerization/orchestration technologies like Rancher, Docker and Kubernetes
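A minimal sketch of the kind of AWS automation described above: reporting unattached EBS volumes with boto3, a common cleanup and cost-optimization task. It is report-only (nothing is deleted) and assumes AWS credentials are already configured in the environment.

```python
# Illustrative automation sketch: report unattached ("available") EBS volumes.
import boto3

ec2 = boto3.client("ec2")
volumes = ec2.describe_volumes(
    Filters=[{"Name": "status", "Values": ["available"]}]
)["Volumes"]

for vol in volumes:
    # CreateTime is returned as a datetime, so it can be formatted directly.
    print(f'{vol["VolumeId"]}: {vol["Size"]} GiB, created {vol["CreateTime"]:%Y-%m-%d}')
```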
Job Title: DevOps Engineer
Location: Remote
Type: Full-time
About Us:
At Tese, we are committed to advancing sustainability through innovative technology solutions. Our platform empowers SMEs, financial institutions, and enterprises to achieve their Environmental, Social, and Governance (ESG) goals. We are looking for a skilled and passionate DevOps Engineer to join our team and help us build and maintain scalable, reliable, and efficient infrastructure.
Role Overview:
As a DevOps Engineer, you will be responsible for designing, implementing, and managing the infrastructure that supports our applications and services. You will work closely with our development, QA, and data science teams to ensure smooth deployment, continuous integration, and continuous delivery of our products. Your role will be critical in automating processes, enhancing system performance, and maintaining high availability.
Key Responsibilities:
- Infrastructure Management:
- Design, implement, and maintain scalable cloud infrastructure on platforms such as AWS, Google Cloud, or Azure.
- Manage server environments, including provisioning, monitoring, and maintenance.
- CI/CD Pipeline Development:
- Develop and maintain continuous integration and continuous deployment pipelines using tools like Jenkins, GitLab CI/CD, or CircleCI.
- Automate deployment processes to ensure quick and reliable releases.
- Configuration Management and Automation:
- Implement infrastructure as code (IaC) using tools like Terraform, Ansible, or CloudFormation.
- Automate system configurations and deployments to improve efficiency and reduce manual errors.
- Monitoring and Logging:
- Set up and manage monitoring tools (e.g., Prometheus, Grafana, ELK Stack) to track system performance and troubleshoot issues.
- Implement logging solutions to ensure effective incident response and system analysis.
- Security and Compliance:
- Ensure systems are secure and compliant with industry standards and regulations.
- Implement security best practices, including identity and access management, network security, and vulnerability assessments.
- Collaboration and Support:
- Work closely with development and QA teams to support application deployments and troubleshoot issues.
- Provide support for infrastructure-related inquiries and incidents.
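As a hedged example of the monitoring responsibility above, the sketch below exposes a custom application metric for Prometheus to scrape using the prometheus_client library; the metric name and port are illustrative, not an existing convention at Tese.

```python
# Hedged sketch: expose a custom metric on an HTTP endpoint for Prometheus to scrape.
import random
import time

from prometheus_client import Gauge, start_http_server

queue_depth = Gauge("example_queue_depth", "Items waiting in the example work queue")

if __name__ == "__main__":
    start_http_server(9100)                      # Prometheus scrapes http://host:9100/metrics
    while True:
        queue_depth.set(random.randint(0, 50))   # stand-in for a real measurement
        time.sleep(15)
```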
Qualifications:
- Education:
- Bachelor's degree in Computer Science, Engineering, or a related field, or equivalent practical experience.
- Experience:
- 3-5 years of experience in DevOps, system administration, or related roles.
- Hands-on experience with cloud platforms such as AWS, Google Cloud Platform, or Azure.
- Technical Skills:
- Proficiency in scripting languages like Bash, Python, or Ruby.
- Strong experience with containerization technologies like Docker and orchestration tools like Kubernetes.
- Knowledge of configuration management tools (Ansible, Puppet, Chef).
- Experience with CI/CD tools (Jenkins, GitLab CI/CD, CircleCI).
- Familiarity with monitoring and logging tools (Prometheus, Grafana, ELK Stack).
- Understanding of networking concepts and security best practices.
- Soft Skills:
- Strong problem-solving skills and attention to detail.
- Excellent communication and collaboration abilities.
- Ability to work in a fast-paced environment and manage multiple tasks.
Preferred Qualifications:
- Experience with infrastructure as code (IaC) tools like Terraform or CloudFormation.
- Knowledge of microservices architecture and serverless computing.
- Familiarity with database administration (SQL and NoSQL databases).
- Experience with Agile methodologies and working in a Scrum or Kanban environment.
- Passion for sustainability and interest in ESG initiatives.
Benefits:
- Competitive salary, benefits package, and performance bonuses.
- Flexible working hours and remote work options.
- Opportunity to work on impactful projects that promote sustainability.
- Professional development opportunities, including access to training and conferences.

Requirements:
• Previously held a DevOps Engineer or Systems Engineer role
• 4+ years of production Linux system administration experience in a high-traffic environment
• 1+ years of experience with Amazon AWS and related services (instances, ELB, EBS, S3, etc.) and abstractions on top of AWS.
• Strong understanding of network fundamentals, IP and related services (DNS, VPN, firewalls, etc.), and security concerns.
• Experience in running Docker and Kubernetes clusters in production.
• Love automating mundane tasks and making developers' lives easier
• Must be able to code in, at a minimum, Python (or Ruby) and Bash.
• Non-trivial production experience with SaltStack and/or Puppet, Composer, Jenkins, Git
• Agile software development best practices - continuous integration, releases, branches, etc.
• Experience with modern monitoring tools; capacity planning.
• Some experience with MySQL, PostgreSQL, Elasticsearch, Node.js, and PHP is a plus.
• Self-motivated, fast learner, detail-oriented, team player with a sense of humor
• Experience in managing CI/CD using Jenkins.

- Hands-on experience building database-backed web applications using Python based frameworks
- Excellent knowledge of Linux and experience developing Python applications that are deployed in Linux environments
- Experience building client-side and server-side API-level integrations in Python
- Experience in containerization and container orchestration systems like Docker, Kubernetes, etc.
- Experience with NoSQL document stores such as Elasticsearch and the wider Elastic Stack (Logstash, Kibana)
- Experience in using and managing Git based version control systems - Azure DevOps, GitHub, Bitbucket etc.
- Experience in using project management tools like Jira, Azure DevOps etc.
- Expertise in Cloud based development and deployment using cloud providers like AWS or Azure
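A minimal sketch of a Python API-level integration with retry handling, as referenced in the list above; the endpoint URL is a placeholder.

```python
# Hedged sketch: an API client call with automatic retries on transient server errors.
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

session = requests.Session()
retry = Retry(total=3, backoff_factor=0.5, status_forcelist=[502, 503, 504])
session.mount("https://", HTTPAdapter(max_retries=retry))

response = session.get("https://api.example.com/v1/items", timeout=10)  # placeholder URL
response.raise_for_status()
print(response.json())
```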
Position Level: Senior Engineer
Company Overview:
AskSid.ai is a fast-growing, four-year-old start-up based in Bangalore, co-founded by two ex-Mindtree employees, each with 20+ years of experience. We were rated the No. 1 emerging SaaS company in India and won the NASSCOM Emerge 50 League of 10 award in 2019. We were also rated the most innovative AI company in India for 2020 by CII and Accenture Ventures. As a growing company, we are looking for passionate engineers who aspire to build world-class technology products of internet scale.
Job purpose:
Setup, optimize, and maintain Kubernetes clusters on Microsoft Azure Cloud.
Responsibilities
● Set up, maintain, optimize, and secure various Kubernetes clusters on MS Azure Cloud
● Set up and maintain containers, container availability, auto-scaling, storage management, DNS, and proxy setup; maintain firewalls, app gateways, and load balancers on MS Azure Cloud.
● Build and manage backup, restore, and DR activities
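As an illustrative sketch of the cluster operations above, the snippet below lists pods that are not in a healthy phase using the Kubernetes Python client; it assumes a kubeconfig for the AKS (or bare-metal) cluster is already in place.

```python
# Illustrative cluster health check: list pods not in the Running or Succeeded phase.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

for pod in core.list_pod_for_all_namespaces().items:
    if pod.status.phase not in ("Running", "Succeeded"):
        print(f"{pod.metadata.namespace}/{pod.metadata.name}: {pod.status.phase}")
```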
Knowledge and skills
Education and Experience
- Engineering degree in computer science
- 3-5 years of experience in setup and management of Kubernetes infrastructure
- Expert-level analytical and problem-solving skills
- Ability to communicate clearly in English
- Microsoft Azure
- Kubernetes, AKS services as well as custom clusters on bare metal infrastructure
- Linux internals & services
- Docker, Docker Registry
- NGINX, Load Balancing, Firewall, Security, PKI
- Shell & Awk Script, Azure Templates & scripting, Python Scripting
- Knowledge of NoSQL Databases
Minimum 4 years of experience
Skillsets:
- Build automation/CI: Jenkins
- Secure repositories: Artifactory, Nexus
- Build technologies: Maven, Gradle
- Development Languages: Python, Java, C#, Node, Angular, React/Redux
- SCM systems: Git, Github, Bitbucket
- Code Quality: Fisheye, Crucible, SonarQube
- Configuration Management: Packer, Ansible, Puppet, Chef
- Deployment: uDeploy, XLDeploy
- Containerization: Kubernetes, Docker, PCF, OpenShift
- Automation frameworks: Selenium, TestNG, Robot
- Work Management: JAMA, Jira
- Strong problem-solving skills; good verbal and written communication skills
- Good knowledge of Linux environments: RedHat, etc.
- Good shell scripting skills
- Good to have: cloud technologies such as AWS, GCP, and Azure
The mission of R&D IT Design Infrastructure is to offer a state-of-the-art design environment for the chip hardware designers. The R&D IT design environment is a complex landscape of EDA applications, high-performance compute, and storage environments, consolidated in five regional datacenters. Over 7,000 chip hardware designers, spread across 40+ locations around the world, use this state-of-the-art design environment to design new chips and drive the company's innovation. The following figures give an idea of the scale: the landscape has 75,000+ cores and 30+ PBytes of data, and serves 2,000+ CAD applications and versions. The operational service teams are globally organized to provide 24/7 support to the chip hardware design and software design projects.
Since the landscape is too complex to manage in the traditional way, it is our strategy to transform our R&D IT design infrastructure into “software-defined datacenters”. This transformation entails a different way of working and a different mindset (DevOps, Site Reliability Engineering) to ensure that our IT services are reliable. That’s why we are looking for a DevOps Linux Engineer to strengthen the team that is building a new on-premise software-defined virtualization and containerization platform (PaaS) for our IT landscape, so that we can manage it with best practices from software engineering and offer the IT service reliability required by our chip hardware design community.
It will be your role to develop and maintain the base Linux OS images that are offered via automation to the customers of the internal (on-premise) cloud platforms.
Your responsibilities as DevOps Linux Engineer:
• Develop and maintain the base RedHat Linux operating system images
• Develop and maintain code to configure and test the base OS image
• Provide input to support the team in designing, developing, and maintaining automation products with playbooks (YAML) and modules (Python/PowerShell) in tools like Ansible Tower and ServiceNow
• Test and verify the code produced by the team (including your own) to continuously improve and refactor
• Troubleshoot and solve incidents on the RedHat Linux operating system
• Work actively with other teams to align on the architecture of the PaaS solution
• Keep the base OS image up to date via patches, or make sure patches are available to the virtual machine owners
• Train team members and others with your extensive automation knowledge
• Work together with the ServiceNow developers in your team to provide the most intuitive end-user experience possible for virtual machine OS deployments
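As a hedged illustration of the playbook and module work above, the sketch below is a minimal custom Ansible module in Python; the module's 'package' parameter and its behaviour are hypothetical, not part of an existing product.

```python
# Hedged sketch of a minimal custom Ansible module (Python). A real module would
# inspect or modify the managed host; this one only echoes its input parameter.
from ansible.module_utils.basic import AnsibleModule


def main():
    module = AnsibleModule(
        argument_spec=dict(
            package=dict(type="str", required=True),   # hypothetical parameter
        ),
        supports_check_mode=True,
    )
    package = module.params["package"]
    module.exit_json(changed=False, msg=f"would verify that {package} is installed")


if __name__ == "__main__":
    main()
```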
We are looking for a DevOps engineer/consultant with the following characteristics:
• Master's or Bachelor's degree
• You are a technical, creative, analytical, and open-minded engineer who is eager to learn and not afraid to take initiative.
• Your favorite t-shirt has “Linux” or “RedHat” printed on it at least once.
• Linux guru: you have great knowledge of Linux servers (RedHat), RedHat Satellite 6, and other RedHat products
• Experience in infrastructure services, e.g., networking, DNS, LDAP, SMTP
• DevOps mindset: you are a team player who is eager to develop and maintain cool products to automate/optimize processes in a complex IT infrastructure, and who is able to build and maintain productive working relationships
• You have great English communication skills, both verbally and in writing.
• No issue with working outside business hours to support the platform for critical R&D applications
Other competences we value, but which are not strictly mandatory:
• Experience with agile development methods, like Scrum, and conviction of their power to deliver products with immense (business) value.
• “Security” is your middle name, and you are always challenging yourself and your colleagues to design and develop new solutions that are as secure as possible.
• Being a master of automation and orchestration with tools like Ansible Tower (or comparable), and feeling comfortable developing new modules in Python or PowerShell.
• It would be awesome if you are already a true Yoda when it comes to code version control and branching strategies with Git, and preferably have worked with GitLab before.
• Experience with automated testing in a CI/CD pipeline with Ansible, Python, and tools like Selenium.
• An enthusiast of cloud platforms like Azure & AWS.
• Background in and affinity with R&D
We are looking for a full-time remote DevOps Engineer who has worked with CI/CD automation, big data pipelines and Cloud Infrastructure, to solve complex technical challenges at scale that will reshape the healthcare industry for generations. You will get the opportunity to be involved in the latest tech in big data engineering, novel machine learning pipelines and highly scalable backend development. The successful candidate will work in a team of highly skilled and experienced developers, data scientists, and the CTO.
Job Requirements
- Experience deploying, automating, maintaining, and improving complex services and pipelines
- Strong understanding of DevOps tools, processes, and methodologies
- Experience with AWS CloudFormation and the AWS CLI is essential
- The ability to work to project deadlines efficiently and with minimum guidance
- A positive attitude and enjoys working within a global distributed team
Skills
- Highly proficient working with CI/CD and automating infrastructure provisioning
- Deep understanding of the AWS Cloud platform and hands-on experience setting up and maintaining large-scale implementations
- Experience with JavaScript/TypeScript, Node, Python and Bash/Shell Scripting
- Hands-on experience with Docker and container orchestration
- Experience setting up and maintaining big data pipelines, Serverless stacks and containers infrastructure
- An interest in healthcare and medical sectors
- Technical degree with 4+ years of infrastructure and automation experience
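Since AWS CloudFormation experience is called out as essential above, here is a hedged sketch of driving a stack deployment from Python with boto3; the stack name and template file are placeholders, not artifacts from this role.

```python
# Illustrative only: create a CloudFormation stack from a local template with boto3.
import boto3

cf = boto3.client("cloudformation")

with open("pipeline-stack.yaml") as handle:          # placeholder template file
    template_body = handle.read()

cf.create_stack(
    StackName="example-data-pipeline",               # placeholder stack name
    TemplateBody=template_body,
    Capabilities=["CAPABILITY_NAMED_IAM"],
)
cf.get_waiter("stack_create_complete").wait(StackName="example-data-pipeline")
```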

