Expert troubleshooting skills.
Expertise in designing highly secure cloud services and cloud infrastructure using AWS (EC2, RDS, S3, ECS, Route53).
Experience with DevOps tools including Docker, Ansible, Terraform.
Experience with monitoring tools such as Datadog and Splunk.
Experience building and maintaining large-scale infrastructure in AWS, including leveraging one or more coding languages for automation.
Experience providing 24x7 on-call production support.
Understanding of best practices, industry standards, and repeatable, supportable processes.
Knowledge and working experience of container-based deployments (e.g., Docker, AWS ECS) and related tooling such as Terraform.
Knowledge and working experience of TCP/IP, DNS, certificates, and networking concepts.
Knowledge and working experience of the CI/CD development pipeline and the CI/CD maturity model (Jenkins).
Strong core Linux OS skills, including shell and Python scripting.
Working experience of modern engineering operations duties, including providing the necessary tools and infrastructure to support high-performance Dev and QA teams.
Database (MySQL) administration skills are a plus.
Prior work on high-load, high-traffic infrastructure is a plus.
Clear vision of and commitment to providing outstanding customer service.
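As a hedged illustration of the on-call scripting these requirements describe (the log format and the component field position are hypothetical, not taken from the posting), a small Python sketch that summarizes error counts per component:

```python
import re
from collections import Counter

def error_summary(log_lines, pattern=r"\b(ERROR|CRITICAL)\b"):
    """Count ERROR/CRITICAL occurrences per component in app-style log lines.

    Assumes a hypothetical format like:
    '2024-01-01 12:00:00 ERROR billing Timeout talking to RDS'
    where the fourth whitespace-separated field is the component name.
    """
    counts = Counter()
    rx = re.compile(pattern)
    for line in log_lines:
        if rx.search(line):
            parts = line.split()
            # Index 3 is the component name in this assumed format.
            component = parts[3] if len(parts) > 3 else "unknown"
            counts[component] += 1
    return dict(counts)
```

A summary like this is the kind of quick triage script an on-call engineer might keep alongside Datadog or Splunk dashboards.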

About Quicken Inc
Position: DevOps Engineer / Senior DevOps Engineer
Experience: 3 to 6 Years
Key Skills: AWS, Terraform, Docker, Kubernetes, DevSecOps pipeline
Job Description:
- AWS Infrastructure: Architect, deploy, and manage AWS services like EC2, S3, RDS, Lambda, SageMaker, API Gateway, and VPC.
- Networking: Proficient in subnetting, endpoints, NACL, security groups, VPC flow logs, and routing.
- API Management: Design and manage secure, scalable APIs using AWS API Gateway.
- CI/CD Pipelines: Build and maintain CI/CD pipelines with AWS CodePipeline, CodeBuild, and CodeDeploy.
- Automation & IaC: Use Terraform and CloudFormation for automating infrastructure management.
- Containerization & Kubernetes: Expertise in Docker, Kubernetes, and managing containerized deployments.
- Monitoring & Logging: Implement monitoring with AWS CloudWatch, CloudTrail, and other tools.
- Security: Apply AWS security best practices using IAM, KMS, Secrets Manager, and GuardDuty.
- Cost Management: Monitor and optimize AWS usage and costs.
- Collaboration: Partner with development, QA, and operations teams to enhance productivity and system reliability.
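The subnetting proficiency listed above can be sketched with Python's standard `ipaddress` module; the CIDR values below are examples only, not from the posting:

```python
import ipaddress

def plan_subnets(vpc_cidr, new_prefix):
    """Split a VPC CIDR into equal-sized subnets (a minimal subnet-planning sketch).

    For example, splitting a /16 into /24s yields 256 subnets.
    """
    vpc = ipaddress.ip_network(vpc_cidr)
    return [str(s) for s in vpc.subnets(new_prefix=new_prefix)]
```

In practice the resulting CIDRs would feed into Terraform or CloudFormation subnet resources rather than being computed by hand.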

Skills and Qualifications:
Strong experience with continuous integration/continuous deployment (CI/CD) pipeline tools such as Jenkins, Travis CI, or GitLab CI.
Proficiency in scripting languages such as Python, Bash, or Ruby.
Knowledge of infrastructure automation tools such as Ansible, Puppet, or Terraform.
Experience with cloud platforms such as AWS, Azure, or GCP.
Knowledge of container orchestration tools such as Docker, Kubernetes, or OpenShift.
Experience with version control systems such as Git.
Familiarity with Agile methodologies and practices.
Understanding of networking concepts and principles.
Knowledge of database technologies such as MySQL, MongoDB, or PostgreSQL.
Good understanding of security and data protection principles.
Roles and responsibilities:
● Build and set up new development tools and infrastructure
● Work on ways to automate and improve development and release processes
● Deploy updates and fixes
● Help ensure information security best practices are followed
● Provide Level 2 technical support
● Perform root cause analysis for production errors
● Investigate and resolve technical issues
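A small, hedged sketch of the deploy automation these responsibilities imply: retrying a flaky release step with exponential backoff. The callable passed in stands for any real deploy action (e.g., a hypothetical `push_release()`); nothing here is from the posting:

```python
import time

def retry(fn, attempts=3, base_delay=0.01, sleep=time.sleep):
    """Retry a flaky zero-argument step with exponential backoff.

    Re-raises the last exception if every attempt fails. The `sleep`
    parameter is injectable so the backoff can be tested without waiting.
    """
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise
            sleep(base_delay * (2 ** i))  # 0.01s, 0.02s, 0.04s, ...
```

Wrapping deploy and fix rollouts this way smooths over transient failures before they become pages.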
About The Role:
The products and services of Eclat Engineering Pvt. Ltd. are used by some of the leading institutions in India and abroad, and demand for them is growing rapidly. We are looking for a capable and dynamic Senior DevOps engineer to help set up, maintain, and scale our infrastructure operations. This individual will have the challenging responsibility of running our IT infrastructure and delivering customer services at stringent international standards of service quality. They will leverage the latest IT tools to automate and streamline the delivery of our services while implementing industry-standard processes and knowledge management.
Roles & Responsibilities:
- Infrastructure and Deployment Automation: Design, implement, and maintain automation for infrastructure
provisioning and application deployment. Own the CI/CD pipelines and ensure they are efficient, reliable, and
scalable.
- System Monitoring and Performance: Take ownership of monitoring systems and ensure the health and
performance of the infrastructure. Proactively identify and address performance bottlenecks and system issues.
- Cloud Infrastructure Management: Manage cloud infrastructure (e.g., AWS, Azure, GCP) and optimize resource
usage. Implement cost-saving measures while maintaining scalability and reliability.
- Configuration Management: Manage configuration management tools (e.g., Ansible, Puppet, Chef) to ensure
consistency across environments. Automate configuration changes and updates.
- Security and Compliance: Own security policies, implement best practices, and ensure compliance with industry
standards. Lead efforts to secure infrastructure and applications, including patch management and access controls.
- Collaboration with Development and Operations Teams: Foster collaboration between development and
operations teams, promoting a DevOps culture. Be the go-to person for resolving cross-functional infrastructure
issues and improving the development process.
- Disaster Recovery and Business Continuity: Develop and maintain disaster recovery plans and procedures. Ensure
business continuity in the event of system failures or other disruptions.
- Documentation and Knowledge Sharing: Create and maintain comprehensive documentation for configurations,
processes, and best practices. Share knowledge and mentor junior team members.
- Technical Leadership and Innovation: Stay up-to-date with industry trends and emerging technologies. Lead efforts
to introduce new tools and technologies that enhance DevOps practices.
- Problem Resolution and Troubleshooting: Be responsible for diagnosing and resolving complex issues related to
infrastructure and deployments. Implement preventive measures to reduce recurring problems.
Requirements:
● B.E / B.Tech / M.E / M.Tech / MCA / M.Sc. IT (if not, candidates should be able to demonstrate the required skills)
● Overall 3+ years of experience in DevOps and Cloud operations specifically in AWS.
● Experience with Linux administration
● Experience with microservice architecture, containers, Kubernetes, and Helm is a must
● Experience in configuration management, preferably Ansible
● Experience in Shell Scripting is a must
● Experience in developing and maintaining CI/CD processes using tools like Gitlab, Jenkins
● Experience in logging, monitoring and analytics
● An understanding of writing Infrastructure as Code using tools like Terraform
● Preferred: AWS, Kubernetes, Ansible
Must Have:
● Knowledge of AWS Cloud Platform.
● Good experience with microservice architecture, Kubernetes, Helm, and container-based technologies
● Hands-on experience with Ansible.
● Should have experience in working and maintaining CI/CD Processes.
● Hands-on experience with version control tools like Git.
● Experience with monitoring tools such as CloudWatch, Sysdig, etc.
● Sound experience in administering Linux servers and Shell Scripting.
● Should have a good understanding of IT security and have the knowledge to secure production environments (OS and server software).
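As an illustrative sketch of the monitoring and analytics mindset asked for above (the check names and callables are invented for the example), a tiny health-check aggregator of the kind that might sit behind a CloudWatch or Sysdig alert:

```python
def overall_status(checks):
    """Aggregate named health checks into one status.

    `checks` maps a check name to a zero-argument callable returning
    True (healthy) or False. A check that raises counts as unhealthy.
    Returns ('ok', []) or ('degraded', [failed check names]).
    """
    failed = []
    for name, check in checks.items():
        try:
            healthy = bool(check())
        except Exception:
            healthy = False
        if not healthy:
            failed.append(name)
    return ("ok", []) if not failed else ("degraded", failed)
```

Keeping checks as injectable callables makes the aggregation logic itself trivial to test without touching real infrastructure.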
Job Title: Data Architect - Azure DevOps
Job Location: Mumbai (Andheri East)
About the company:
MIRACLE HUB CLIENT is a predictive analytics and artificial intelligence company headquartered in Boston, US, with offices across the globe. We build prediction models and algorithms to solve high-priority business problems. Working across multiple industries, we have designed and developed breakthrough analytic products and decision-making tools by leveraging predictive analytics, AI, machine learning, and deep domain expertise.
Skill-sets Required:
- Design Enterprise Data Models
- Azure Data Specialist
- Security and Risk
- GDPR and other compliance knowledge
- Scrum/Agile
Job Role:
- Design and implement effective database solutions and models to store and retrieve company data
- Examine and identify database structural necessities by evaluating client operations, applications, and programming.
- Assess database implementation procedures to ensure they comply with internal and external regulations
- Install and organize information systems to guarantee company functionality.
- Prepare accurate database design and architecture reports for management and executive teams.
Desired Candidate Profile:
- Bachelor’s degree in computer science, computer engineering, or relevant field.
- A minimum of 3 years’ experience in a similar role.
- Strong knowledge of database structure systems and data mining.
- Excellent organizational and analytical abilities.
- Outstanding problem solver.
- Immediate joining preferred (a notice period of 1 month is also acceptable)
- Excellent English communication and presentation skills, both verbal and written
- Charismatic, competitive and enthusiastic personality with negotiation skills
Compensation: No bar.
We are a boutique IT services & solutions firm headquartered in the Bay Area with offices in India. Our offering includes custom-configured hybrid cloud solutions backed by our managed services. We combine best-in-class DevOps and IT infrastructure management practices to manage our clients' hybrid cloud environments.
In addition, we build and deploy our private cloud solutions using OpenStack to provide our clients with a secure, cost-effective, and scalable hybrid cloud solution. We work with start-ups as well as enterprise clients.
This is an exciting opportunity for an experienced Cloud Engineer to work on exciting projects and have an opportunity to expand their knowledge working on adjacent technologies as well.
Must have skills
• Provisioning skills on IaaS cloud computing for platforms such as AWS, Azure, GCP.
• Strong working experience in the AWS space with various AWS services and implementations (i.e., VPCs, SES, EC2, S3, Route 53, CloudFront, etc.)
• Ability to design solutions based on client requirements.
• Some experience with various network LAN/WAN appliances like (Cisco routers and ASA systems, Barracuda, Meraki, SilverPeak, Palo Alto, Fortinet, etc.)
• Understanding of networked storage like (NFS / SMB / iSCSI / Storage GW / Windows Offline)
• Linux / Windows server installation, maintenance, monitoring, data backup and recovery, security, and administration.
• Good knowledge of TCP/IP protocol & internet technologies.
• Passion for innovation and problem solving, in a start-up environment.
• Good communication skills.
Good to have
• Remote Monitoring & Management.
• Familiarity with Kubernetes and Containers.
• Exposure to DevOps automation scripts & experience with tools like Git, bash scripting, PowerShell, AWS Cloud Formation, Ansible, Chef or Puppet will be a plus.
• Architect / Practitioner certification from OEM with hands-on capabilities.
What you will be working on
• Troubleshoot and handle L2/L3 tickets.
• Design and architect Enterprise Cloud systems and services.
• Design, Build and Maintain environments primarily in AWS using EC2, S3/Storage, CloudFront, VPC, ELB, Auto Scaling, Direct Connect, Route53, Firewall, etc.
• Build and deploy in GCP/ Azure as needed.
• Architect cloud solutions keeping performance, cost and BCP considerations in mind.
• Plan cloud migration projects as needed.
• Collaborate & work as part of a cohesive team.
• Help build our private cloud offering on OpenStack.
Looking for an experienced candidate with strong development and programming experience; preferred knowledge:
- Cloud computing (i.e. Kubernetes, AWS, Google Cloud, Azure)
- Coming from a strong development background and has programming experience with Java and/or NodeJS (other programming languages such as Groovy/python are a big bonus)
- Proficient with Unix systems and bash
- Proficient with git/GitHub/GitLab/bitbucket
Desired skills-
- Docker
- Kubernetes
- Jenkins
- Experience in any scripting language (Python, shell scripting, JavaScript)
- NGINX / Load Balancer
- Splunk / ETL tools

Bachelor's degree in information security, computer science, or related.
Strong DevOps experience of at least 4 years
Strong experience in Unix/Linux/Python scripting
Strong networking knowledge; vSphere networking stack knowledge desired.
Experience on Docker and Kubernetes
Experience with cloud technologies (AWS/Azure)
Exposure to continuous integration/delivery tools such as Jenkins or Spinnaker
Exposure to configuration management systems such as Ansible
Knowledge of resource monitoring systems
Ability to scope and estimate
Strong verbal and communication skills
Advanced knowledge of Docker and Kubernetes.
Exposure to Blockchain as a Service (BaaS) platforms such as Chainstack, IBM Blockchain Platform, Oracle Blockchain Cloud, Rubix, VMware, etc.
Capable of provisioning and maintaining local enterprise blockchain platforms for Development and QA (Hyperledger Fabric/BaaS/Corda/Ethereum).
About Navis

DevOps Architect
Experience: 10 - 12+ years of relevant DevOps experience
Locations : Bangalore, Chennai, Pune, Hyderabad, Jaipur.
Qualification:
• Bachelor's or advanced degree in computer science, software engineering, or equivalent is required.
• Certifications in specific areas are desired
Technical Skillset: Skills Proficiency level
- Build tools (Ant or Maven) - Expert
- CI/CD tool (Jenkins or Github CI/CD) - Expert
- Cloud DevOps (AWS CodeBuild, CodeDeploy, CodePipeline, etc.) or Azure DevOps - Expert
- Infrastructure As Code (Terraform, Helm charts etc.) - Expert
- Containerization (Docker, Docker Registry) - Expert
- Scripting (Linux) - Expert
- Cluster deployment (Kubernetes) & maintenance - Expert
- Programming (Java) - Intermediate
- Application Types for DevOps (Streaming like Spark, Kafka, Big data like Hadoop etc) - Expert
- Artifactory (JFrog) - Expert
- Monitoring & Reporting (Prometheus, Grafana, PagerDuty etc.) - Expert
- Ansible, MySQL, PostgreSQL - Intermediate
• Source Control (like Git, Bitbucket, Svn, VSTS etc)
• Continuous Integration (like Jenkins, Bamboo, VSTS )
• Infrastructure Automation (like Puppet, Chef, Ansible)
• Deployment Automation & Orchestration (like Jenkins, VSTS, Octopus Deploy)
• Container Concepts (Docker)
• Orchestration (Kubernetes, Mesos, Swarm)
• Cloud (like AWS, Azure, Google Cloud, OpenStack)
Roles and Responsibilities
• Automating processes with the proper tools.
• Developing appropriate DevOps channels throughout the organization.
• Evaluating, implementing and streamlining DevOps practices.
• Establishing a continuous build environment to accelerate software deployment and development processes.
• Engineering general and effective processes.
• Helping operation and developers teams to solve their problems.
• Supervising, examining, and handling technical operations.
• Providing DevOps processes and operations.
• Capacity to lead teams with a strong leadership attitude.
• Must possess excellent automation skills and the ability to drive initiatives to automate processes.
• Building strong cross-functional leadership skills and working together with the operations and engineering teams to make sure that systems are scalable and secure.
• Excellent knowledge of software development and software testing methodologies along with configuration management practices in Unix and Linux-based environment.
• Possess sound knowledge of cloud-based environments.
• Experience in handling automated deployment CI/CD tools.
• Must possess excellent knowledge of infrastructure automation tools (Ansible, Chef, and Puppet).
• Hands-on experience in working with Amazon Web Services (AWS).
• Must have strong expertise in operating Linux/Unix environments and scripting languages like Python, Perl, and Shell.
• Ability to review deployment and delivery pipelines i.e., implement initiatives to minimize chances of failure, identify bottlenecks and troubleshoot issues.
• Previous experience in implementing continuous delivery and DevOps solutions.
• Experience in designing and building solutions to move data and process it.
• Must possess expertise in any of the coding languages depending on the nature of the job.
• Experience with containers and container orchestration tools (AKS, EKS, OpenShift, Kubernetes, etc)
• Experience with version control systems a must (GIT an advantage)
• Belief in "Infrastructure as Code" (IaC), including experience with open-source tools such as Terraform
• Treats best practices for security as a requirement, not an afterthought
• Extensive experience with version control systems like GitLab and their use in release management, branching, merging, and integration strategies
• Experience working with Agile software development methodologies
• Proven ability to work on cross-functional Agile teams
• Mentor other engineers in best practices to improve their skills
• Creating suitable DevOps channels across the organization.
• Designing efficient practices.
• Delivering comprehensive best practices.
• Managing and reviewing technical operations.
• Ability to work independently and as part of a team.
• Exceptional communication skills, be knowledgeable about the latest industry trends, and highly innovative
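A hedged example of the release-management automation touched on above (branching, merging, and release strategies): computing the next tag from a semantic version string. The function name and the plain `major.minor.patch` convention are assumptions for this sketch, not requirements from the posting:

```python
def bump_version(version, part):
    """Compute the next release tag from a 'major.minor.patch' string.

    `part` selects which component to bump: 'major', 'minor', or 'patch'.
    Lower components reset to zero when a higher one is bumped.
    """
    major, minor, patch = (int(x) for x in version.split("."))
    if part == "major":
        return f"{major + 1}.0.0"
    if part == "minor":
        return f"{major}.{minor + 1}.0"
    if part == "patch":
        return f"{major}.{minor}.{patch + 1}"
    raise ValueError(f"unknown part: {part}")
```

A helper like this typically runs inside a CI job that tags the release commit in Git.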
● Responsible for design, development, and implementation of Cloud solutions.
● Responsible for achieving automation & orchestration using tools such as Puppet/Chef
● Monitoring the product's security & health (Datadog/New Relic)
● Managing and Maintaining databases(Mongo & Postgres)
● Automating Infrastructure using AWS services like CloudFormation
● Participating in Infrastructure Security Audits
● Migrating to Container technologies (Docker/Kubernetes)
● Should be able to work on serverless concepts (AWS Lambda)
● Should be able to work with AWS services like EC2, S3, CloudFormation, EKS, IAM, RDS, etc.
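A minimal, hypothetical sketch of the serverless style referenced above: an AWS Lambda-shaped Python handler returning an API Gateway proxy-style response. The `name` query parameter and the echo behavior are invented for illustration:

```python
import json

def handler(event, context=None):
    """Lambda-style handler for an API Gateway proxy event (illustrative only).

    Reads an optional 'name' query parameter and returns a JSON greeting
    in the statusCode/headers/body shape the proxy integration expects.
    """
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

Because the handler is a plain function, it can be unit-tested locally by calling it with a dict event, with no AWS account involved.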
What you bring:
● Problem-solving skills that enable you to identify the best solutions.
● Team collaboration and flexibility at work.
● Strong verbal and written communication skills that will help in presenting complex ideas in an accessible and engaging way.
● Ability to choose the tools and technologies that best fit the business needs.
Aviso offers:
● Dynamic, diverse, inclusive startup environment driven by transparency and velocity
● Bright, open, sunny working environment and collaborative office space
● Convenient office locations in Redwood City, Hyderabad and Bangalore tech hubs
● Competitive salaries and company equity, and a focus on developing world class talent operations
● Comprehensive health insurance available (medical) for you and your family
● Unlimited leaves with manager approval and a 3 month paid sabbatical after 3 years of service
● CEO moonshots projects with cash awards every quarter
● Upskilling and learning support including via paid conferences, online courses, and certifications
● Rupees 2,500 credited to a Sodexo meal card every month

