Expert troubleshooting skills.
Expertise in designing highly secure cloud services and cloud infrastructure using AWS
(EC2, RDS, S3, ECS, Route53)
Experience with DevOps tools including Docker, Ansible, Terraform.
Experience with monitoring tools such as DataDog, Splunk.
Experience building and maintaining large scale infrastructure in AWS including
experience leveraging one or more coding languages for automation.
Experience providing 24x7 on-call production support.
Understanding of best practices, industry standards and repeatable, supportable
processes.
Knowledge and working experience of container-based deployments such as Docker and
AWS ECS, and of provisioning with Terraform.
Knowledge and working experience of TCP/IP, DNS, certificates, and networking concepts.
Knowledge and working experience of the CI/CD development pipeline and of the CI/CD
maturity model (Jenkins).
Strong core Linux OS skills, shell scripting, python scripting.
Working experience of modern engineering operations duties, including providing the
necessary tools and infrastructure to support high performance Dev and QA teams.
Database administration skills (MySQL) are a plus.
Prior work in high load and high-traffic infrastructure is a plus.
Clear vision of and commitment to providing outstanding customer service.
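The listing above asks for scripting skills used for automation alongside core Linux skills. As a hedged illustration (not part of the posting), here is a minimal Python sketch of a typical task: flagging filesystems above a usage threshold from `df -P`-style output. The sample data and all names are invented; a real script would obtain the output via `subprocess`.

```python
#!/usr/bin/env python3
"""Illustrative sketch: flag filesystems above a usage threshold."""

# Stand-in for the output of `df -P`; values are made up.
SAMPLE_DF = """\
Filesystem 1024-blocks Used Available Capacity Mounted on
/dev/xvda1 8376300 6543210 1833090 79% /
/dev/xvdf 52416860 49796017 2620843 95% /data
tmpfs 2009644 0 2009644 0% /dev/shm
"""

def over_threshold(df_output: str, threshold: int = 90) -> list[tuple[str, int]]:
    """Return (mount point, percent used) pairs at or above `threshold`."""
    alerts = []
    for line in df_output.splitlines()[1:]:      # skip the header row
        fields = line.split()
        pct = int(fields[4].rstrip("%"))         # e.g. "95%" -> 95
        if pct >= threshold:
            alerts.append((fields[5], pct))
    return alerts

print(over_threshold(SAMPLE_DF))                 # prints: [('/data', 95)]
```

In practice such a check would feed an alerting channel rather than `print`, but the parsing and threshold logic is the core of it.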

About Quicken Inc
- Hands-on knowledge of various CI/CD tools (Jenkins/TeamCity, Artifactory, UCD, Bitbucket/GitHub, SonarQube), including setting up automated build-deployment pipelines.
- Very good knowledge of scripting tools and languages such as Shell, Perl or Python, YAML/Groovy, and build tools such as Maven/Gradle.
- Hands-on knowledge in containerization and orchestration tools such as Docker, OpenShift and Kubernetes.
- Good knowledge in configuration management tools such as Ansible, Puppet/Chef and have worked on setting up of monitoring tools (Splunk/Geneos/New Relic/Elk).
- Expertise in job schedulers/workload automation tools such as Control-M or AutoSys is good to have.
- Hands-on knowledge on Cloud technology (preferably GCP) including various computing services and infrastructure setup using Terraform.
- Should have basic understanding on networking, certificate management, Identity and Access Management and Information security/encryption concepts.
- Should support day-to-day tasks related to platform and environment upkeep such as upgrades, patching, migration and system/interfaces integration.
- Should have experience working in an Agile-based SDLC delivery model, multi-tasking and supporting multiple systems/apps.
- Big-data and Hadoop ecosystem knowledge is good to have but not mandatory.
- Should have worked on standard release, change and incident management tools such as ServiceNow/Remedy or similar.
Requirements:
● Should have at least 2+ years of DevOps experience
● Should have experience with Kubernetes
● Should have experience with Terraform/Helm
● Should have experience in building scalable server-side systems
● Should have experience in cloud infrastructure and designing databases
● Having experience with NodeJS/TypeScript/AWS is a bonus
● Having experience with WebRTC is a bonus
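Kubernetes experience features in the requirements above. As a small illustrative sketch (assuming absolute values for maxSurge/maxUnavailable rather than percentages, for simplicity), this is the arithmetic by which a Kubernetes-style rolling update bounds disruption during a deploy:

```python
def rolling_update_bounds(replicas: int, max_surge: int, max_unavailable: int):
    """Return (max total pods during rollout, min ready pods guaranteed).

    Mirrors the Deployment rollingUpdate semantics: up to `max_surge` extra
    pods may exist, and up to `max_unavailable` pods may be down, at once.
    """
    if max_surge == 0 and max_unavailable == 0:
        raise ValueError("maxSurge and maxUnavailable cannot both be zero")
    return replicas + max_surge, replicas - max_unavailable

# 10 replicas, maxSurge=2, maxUnavailable=1:
print(rolling_update_bounds(10, 2, 1))  # prints: (12, 9)
```

The two numbers are what capacity planning cares about: peak resource usage during the rollout, and the serving floor the cluster guarantees while it proceeds.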
Profile: DevOps Engineer
Experience: 5-8 Yrs
Notice Period: Immediate to 30 Days
Job Description:
Technical Experience (Must Have):
Cloud: Azure
DevOps Tool: Terraform, Ansible, Github, CI-CD pipeline, Docker, Kubernetes
Network: Cloud Networking
Scripting Language: Any/All - Shell Script, PowerShell, Python
OS: Linux (Ubuntu, RHEL etc)
Database: MongoDB
Professional Attributes: Excellent communication, written, presentation,
and problem-solving skills.
Experience: Minimum of 5-8 years of experience in Cloud Automation and Application.
Additional Information (Good to have):
Microsoft Azure Fundamentals AZ-900
Terraform Associate
Docker
Certified Kubernetes Administrator
Role:
Building and maintaining tools to automate application and
infrastructure deployment, and to monitor operations.
Design and implement cloud solutions which are secure, scalable,
resilient, monitored, auditable and cost optimized.
Implementing transformation from the as-is state to the future state.
Coordinating with other members of the DevOps team, Development, Test,
and other teams to enhance and optimize existing processes.
Provide systems support; implement monitoring, logging and alerting
solutions so that production systems can be effectively monitored.
Writing Infrastructure as Code (IaC) using Industry standard tools and
services.
Writing application deployment automation using industry standard
deployment and configuration tools.
Design and implement continuous delivery pipelines that serve the
purpose of provisioning and operating client test as well as production
environments.
Implement and stay abreast of Cloud and DevOps industry best practices
and tooling.
Experienced with Azure DevOps, CI/CD and Jenkins.
Experience with Kubernetes (AKS), Ansible, Terraform, and Docker is required.
Good understanding of Azure Networking, Azure Application Gateway, and other Azure components.
Experienced Azure DevOps Engineer ready for a Senior role or already at a Senior level.
Demonstrable experience with the following technologies:
Microsoft Azure Platform as a Service (PaaS) products such as Azure SQL, App Services, Logic Apps, Functions and other serverless services.
Understanding of Microsoft Identity and Access Management products, including Azure AD and Azure AD B2C.
Microsoft Azure Operational and Monitoring tools, including Azure Monitor, App Insights and Log Analytics.
Knowledge of PowerShell, GitHub, ARM templates, version controls/hotfix strategy and deployment automation.
Ability and desire to quickly pick up new technologies, languages, and tools.
Excellent communication skills; a good team player.
Passion for code quality and best practices is an absolute must.
Must show evidence of your passion for technology and continuous learning.
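Several of the Azure requirements above revolve around monitoring and alerting (Azure Monitor, App Insights, Log Analytics). As a hedged sketch with invented data, this is the shape of the threshold rule such tools evaluate: fire when the trailing average of a metric over a window exceeds a limit.

```python
def should_alert(samples: list[float], window: int = 5, limit: float = 80.0) -> bool:
    """Fire when the mean of the trailing `window` samples exceeds `limit`.

    Requires a full window before alerting, so a single spike in sparse
    data does not trigger a page.
    """
    recent = samples[-window:]
    return len(recent) == window and sum(recent) / window > limit

# Invented CPU-percentage samples; the last five average 90.0.
cpu = [40, 55, 70, 85, 90, 95, 92, 88]
print(should_alert(cpu))  # prints: True
```

Real alert rules add aggregation granularity, evaluation frequency, and auto-resolution, but the window-average-versus-threshold comparison above is the core decision.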
Roles and Responsibilities
- Primary stakeholder collaborating with Dir Engineering on software/infrastructure architecture, monitoring/alerting framework and all other architectural level technical issues
- Design and manage implementation of Silvermine’s high performance, scalable, extensible and resilient microservices application stack, based on the existing, partially migrated monolithic application and on new product development. Includes:
- Utilizing either ECS Fargate (no EC2 clusters) or EKS as the orchestration framework – to be tested up to a minimum of 100k concurrent users
- Exploring, designing and implementing use of on demand compute (Lambda) where appropriate
- Scalable and redundant data architecture supporting microservices design principles
- A scalable reverse proxy layer to isolate microservices from managing network connections
- Utilizing CDN capabilities to offload origin load via an intelligent caching strategy
- Leveraging best in breed AWS service offerings to enable team to focus on application stack instead of application scaffolding while minimizing operational complexity and cost
- Monitoring and optimizing the stack
- Security and monitoring
- Leverage AWS and 3rd party services to monitor the application stack and data; secure them from DDoS attacks and security breaches; and alert the team in the event of an incident
- Using APM and logging tools:
- Monitor application stack and infrastructure component performance
- Proactively detect, triage and mitigate stack performance issues
- Alert upon exception events
- Provide triaging tools for debugging and Root Cause Analysis.
- Enhance the CI/CD pipeline to support automated testing, a resilient deployment model (e.g., blue-green, canary) and 100% rollback support (including the data layer)
- Develop a comprehensive, supportable, repeatable IaC implementation using CloudFormation or Terraform
- Take a leadership role and exhibit expertise in the development of standards, architectural governance, design patterns, best practices and optimization of existing architecture.
- Partner with teams and leaders to provide strategic consultation for business process design/optimization, creating strategic technology road maps, performing rapid prototyping and implementing technical solutions to accelerate the fulfillment of the business strategic vision.
- Staying up to date on emerging technologies (AI, Automation, Cloud etc.) and trends with a clear focus on productivity, ease of use and fit-for-purpose, by researching, testing, and evaluating.
- Providing POCs and product implementation guidelines.
- Applying imagination and innovation by creating, inventing, and implementing new or better approaches, alternatives and breakthrough ideas that are valued by customers within the function.
- Assessing current state of solutions, defining future state needs, identifying gaps and recommending new technology solutions and strategic business execution improvements.
- Overseeing and facilitating the evaluation and selection of technology and product standards, and the design of standard configurations/implementation patterns.
- Partnering with other architects and solution owners to create standards and set strategies for the enterprise.
- Communicating directly with business colleagues on applying digital workplace technologies to solve identified business challenges.
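The responsibilities above call for a resilient deployment model such as blue-green or canary with full rollback support. As a hedged sketch (not Silvermine's actual design), here is a stepped canary traffic schedule of the kind such a model uses; rollback is simply returning all weight to the old version:

```python
def canary_steps(increments: list[int]) -> list[tuple[int, int]]:
    """Return (new_version_weight, old_version_weight) pairs per step.

    Weights are percentages; the cumulative shift is capped at 100 so an
    over-specified schedule still ends in a clean cutover.
    """
    schedule, total = [], 0
    for inc in increments:
        total = min(100, total + inc)
        schedule.append((total, 100 - total))
    return schedule

print(canary_steps([10, 15, 25, 50]))
# prints: [(10, 90), (25, 75), (50, 50), (100, 0)]
```

In a real pipeline each step would be gated on health metrics before the next weight shift; the schedule itself is typically expressed in the load balancer or service mesh configuration rather than application code.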
Skills Required:
- Good mentorship skills to coach and guide the team on AWS DevOps.
- Jenkins, Python, Pipeline as Code, CloudFormation templates and Terraform.
- Experience with Docker, containers, Lambda and Fargate is a must
- Experience with CI/CD and Release management
- Strong proficiency in PowerShell scripting
- Demonstrable expertise in Java
- Familiarity with REST APIs
Qualifications:
- Minimum of 5 years of relevant experience in DevOps.
- Bachelor's or Master's in Computer Science or equivalent degree.
- AWS certifications are an added advantage.
Karkinos Healthcare Pvt. Ltd.
The fundamental principle of Karkinos Healthcare is the democratization of cancer care, in a participatory fashion with existing health providers, researchers and technologists. Our vision is to provide millions of cancer patients with affordable and effective treatments, and to have India become a leader in oncology research. Karkinos will be with the patient every step of the way: to advise them, connect them to the best specialists, and coordinate their care.
Karkinos has an eclectic founding team with strong technology, healthcare and finance experience, and a panel of eminent clinical advisors in India and abroad.
Roles and Responsibilities:
- Critical role that involves setting up and owning the dev, staging, and production infrastructure for a platform that uses microservices, data warehouses and a data lake.
- Demonstrate technical leadership with incident handling and troubleshooting.
- Provide software delivery operations and application release management support, including scripting, automated build and deployment processing and process reengineering.
- Build automated deployments for consistent software releases with zero downtime
- Deploy new modules, upgrades and fixes to the production environment.
- Participate in the development of contingency plans including reliable backup and restore procedures.
- Participate in the development of the end-to-end CI/CD process and follow through with other team members to ensure high quality and predictable delivery
- Work on implementing DevSecOps and GitOps practices
- Work with the Engineering team to integrate more complex testing into a containerized pipeline to ensure minimal regressions
- Build platform tools that the rest of the engineering teams can use.
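Zero-downtime releases like those described above typically gate traffic promotion on a health probe. A minimal, self-contained Python sketch of that gate follows; the probe is injected and simulated here, and all names are illustrative rather than taken from any real tool:

```python
import time

def wait_until_healthy(probe, retries: int = 5, delay: float = 0.01) -> bool:
    """Return True as soon as probe() succeeds; False when the retry budget runs out."""
    for _ in range(retries):
        if probe():
            return True
        time.sleep(delay)
    return False

# Simulated probe: the new release reports healthy on the third check.
responses = iter([False, False, True])
print(wait_until_healthy(lambda: next(responses)))  # prints: True
```

In a real deployment the probe would hit the new release's health endpoint, and a `False` result would trigger rollback instead of promotion.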
Apply only if you have:
- 2+ years of software development/technical support experience.
- 1+ years of software development, operations experience deploying and maintaining multi-tiered infrastructure and applications at scale.
- 2+ years of experience in public cloud services: AWS (VPC, EC2, ECS, Lambda, Redshift, S3, API Gateway) or GCP (Kubernetes Engine, Cloud SQL, Cloud Storage, BigQuery, API Gateway, Container Registry) - preferably GCP.
- Experience managing infrastructure for distributed NoSQL systems (Kafka/MongoDB), containers, microservices, and deployment and service orchestration using Kubernetes.
- Experience and a good understanding of Kubernetes, Service Mesh (Istio preferred), API Gateways, network proxies, etc.
- Experience setting up infrastructure for central monitoring, with the ability to debug and trace
- Experience and deep understanding of Cloud Networking and Security
- Experience in Continuous Integration and Delivery (Jenkins/Maven, GitHub/GitLab).
- Strong scripting language knowledge, such as Python, Shell.
- Experience in Agile development methodologies and release management techniques.
- Excellent analytical and troubleshooting skills.
- Ability to continuously learn and make decisions with minimal supervision. You understand that making mistakes means that you are learning.
Interested Applicants can share their resume at sajal.somani[AT]karkinos[DOT]in with subject as "DevOps Engineer".
Position Summary
DevOps is a department of Horizontal Digital, within which we have 3 different practices:
- Cloud Engineering
- Build and Release
- Managed Services
This opportunity is for the Cloud Engineering practice and suits candidates who also have some experience with infrastructure migrations. It is a completely hands-on job focused on migrating client workloads to the cloud, reporting to the Solution Architect/Team Lead; you are also expected to work on different projects building out the Sitecore infrastructure from scratch.
We are a Sitecore Platinum Partner, and the majority of the infrastructure work we do is for Sitecore.
Sitecore is a .NET-based, enterprise-level web CMS that can be deployed on-premises, IaaS, PaaS and containers.
So most of our DevOps work currently involves planning, architecting and deploying infrastructure for Sitecore.
Key Responsibilities:
- This role includes ownership of technical, commercial and service elements related to cloud migration and infrastructure deployments.
- The person selected for this position will ensure high customer satisfaction while delivering infrastructure and migration projects.
- The candidate must expect to work across multiple projects in parallel, and must have a fully flexible approach to working hours.
- The candidate should keep up to date with the rapid technological advancements and developments taking place in the industry.
- The candidate should also have know-how in Infrastructure as Code, Kubernetes, AKS/EKS, Terraform, Azure DevOps, and CI/CD pipelines.
Requirements:
- Bachelor’s degree in computer science or equivalent qualification.
- Total work experience of 6 to 8 Years.
- Total migration experience of 4 to 6 Years.
- Multiple Cloud Background (Azure/AWS/GCP)
- Implementation knowledge of VMs, VNets
- Know-how of Cloud Readiness and Assessment
- Good Understanding of 6 R's of Migration.
- Detailed understanding of the cloud offerings
- Ability to Assess and perform discovery independently for any cloud migration.
- Working Exp. on Containers and Kubernetes.
- Good knowledge of Azure Site Recovery/Azure Migrate/CloudEndure
- Understanding of vSphere and Hyper-V virtualization.
- Working experience with Active Directory.
- Working experience with AWS CloudFormation/Terraform templates.
- Working experience with VPN/ExpressRoute/peering/Network Security Groups/Route Tables/NAT Gateway, etc.
- Experience working with CI/CD tools like Octopus, TeamCity, CodeBuild, CodeDeploy, Azure DevOps, GitHub Actions.
- High Availability and Disaster Recovery implementations, taking RTO and RPO aspects into consideration.
- Candidates with AWS/Azure/GCP Certifications will be preferred.
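The HA/DR requirement above mentions RTO and RPO. As a small worked illustration with made-up figures: a backup cadence meets an RPO only if the interval between restore points never exceeds it, since worst-case data loss equals that interval.

```python
def meets_rpo(backup_interval_min: int, rpo_min: int) -> bool:
    """Worst-case data loss equals the backup interval, so it must not exceed the RPO."""
    return backup_interval_min <= rpo_min

print(meets_rpo(backup_interval_min=60, rpo_min=240))   # hourly backups vs a 4h RPO: True
print(meets_rpo(backup_interval_min=360, rpo_min=240))  # 6-hourly backups vs a 4h RPO: False
```

RTO is the separate question of how quickly the restore itself completes; a plan can satisfy one objective and miss the other.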
MTX Group Inc. is seeking a motivated Lead DevOps Engineer to join our team. MTX Group Inc. is a global implementation partner enabling organizations to become fit enterprises. MTX provides expertise across various platforms and technologies, including Google Cloud, Salesforce, artificial intelligence/machine learning, data integration, data governance, data quality, analytics, visualization and mobile technology. MTX’s very own Artificial Intelligence platform Maverick, enables clients to accelerate processes and critical decisions by leveraging a Cognitive Decision Engine, a collection of purpose-built Artificial Neural Networks designed to leverage the power of Machine Learning. The Maverick Platform includes Smart Asset Detection and Monitoring, Chatbot Services, Document Verification, to name a few.
Responsibilities:
- Be responsible for software releases, configuration, monitoring and support of production system components and infrastructure.
- Troubleshoot technical or functional issues in a complex environment, across various global applications and platforms, to provide timely resolution.
- Bring experience on Google Cloud Platform.
- Write scripts and automation tools in languages such as Bash/Python/Ruby/Golang.
- Configure and manage data sources like PostgreSQL, MySQL, Mongo, Elasticsearch, Redis, Cassandra, Hadoop, etc
- Build automation and tooling around Google Cloud Platform using technologies such as Anthos, Kubernetes, Terraform, Google Deployment Manager, Helm, Cloud Build etc.
- Bring a passion to stay on top of DevOps trends, experiment with and learn new CI/CD technologies.
- Work with users to understand and gather their needs for our catalogue, then participate in the required development.
- Manage several streams of work concurrently
- Understand how various systems work
- Understand how IT operations are managed
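The responsibilities above include writing scripts and automation tools in Bash/Python/Ruby/Golang. One pattern such scripts almost always need is retry with exponential backoff; the hedged Python sketch below simulates a flaky action so it stays self-contained:

```python
import time

def retry_with_backoff(action, attempts: int = 4, base_delay: float = 0.01):
    """Call `action` until it succeeds, doubling the delay between attempts."""
    delay = base_delay
    for attempt in range(attempts):
        try:
            return action()
        except RuntimeError:
            if attempt == attempts - 1:
                raise                 # retry budget exhausted: re-raise
            time.sleep(delay)
            delay *= 2                # exponential backoff

calls = {"n": 0}
def flaky():
    """Simulated transient failure: succeeds on the third call."""
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient error")
    return "ok"

print(retry_with_backoff(flaky))      # prints: ok
```

Production variants usually add jitter to the delay and restrict the caught exception to the provider's transient error types, but the retry loop itself is the essential shape.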
What you will bring:
- 5 years of work experience as a DevOps Engineer.
- Must possess ample knowledge and experience in system automation, deployment, and implementation.
- Must possess experience using Linux and Jenkins, and ample experience configuring and automating monitoring tools.
- Experience with the software development process and with tools and languages like SaaS, Python, Java, MongoDB, Shell scripting, MySQL, and Git.
- Knowledge in handling distributed data systems. Examples: Elasticsearch, Cassandra, Hadoop, and others.
What we offer:
- Group Medical Insurance (Family Floater Plan - Self + Spouse + 2 Dependent Children)
- Sum Insured: INR 5,00,000/-
- Maternity cover up to two children
- Inclusive of COVID-19 Coverage
- Cashless & Reimbursement facility
- Access to free online doctor consultation
- Personal Accident Policy (Disability Insurance) -
- Sum Insured: INR. 25,00,000/- Per Employee
- Accidental Death and Permanent Total Disability is covered up to 100% of Sum Insured
- Permanent Partial Disability is covered as per the scale of benefits decided by the Insurer
- Temporary Total Disability is covered
- An option of Paytm Food Wallet (up to Rs. 2500) as a tax saver benefit
- Monthly Internet Reimbursement of up to Rs. 1,000
- Opportunity to pursue Executive Programs/ courses at top universities globally
- Professional Development opportunities through various MTX sponsored certifications on multiple technology stacks including Salesforce, Google Cloud, Amazon & others