Responsibilities
● Work with application development teams to identify and understand their operational pain points.
● Document these challenges and define goals to be achieved by the infrastructure team.
● Prototype and evaluate multiple solutions to achieve those goals, often by experimenting with the various vendors and tools available.
● Roll out tools and processes with a heavy focus on automation.
● Evangelize and help onboard application development teams on the platforms provided by the infrastructure team.
● Co-own the responsibility with application development teams to ensure the reliability of services.
● Design and implement observability solutions to ensure ease of maintenance and quick debugging of services.
● Establish and implement administrative and operational best practices in the application development teams.
● Find avenues to reduce infrastructure costs and drive optimization in all services.
Qualifications
● 5+ years of experience as a DevOps / Infrastructure engineer with cloud platforms (preferably AWS)
● Experience with Git, CI/CD, Docker, etc.
● Experience in working with infrastructure as code (Terraform, etc).
● Strong Linux Shell scripting experience
● Experience with a programming language such as Python, Java, or Kotlin.
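As an illustration of the shell-scripting and Python qualifications above, here is a minimal, hypothetical sketch (function name and sample data invented for illustration) of a typical small automation task: parsing `df -P`-style output and flagging filesystems above a usage threshold.

```python
# Hypothetical sketch: parse `df -P`-style output and flag filesystems whose
# usage exceeds a threshold -- the kind of small automation task the
# shell-scripting/Python requirements above imply.

def find_full_filesystems(df_output: str, threshold: int = 80) -> list[str]:
    """Return mount points whose Use% exceeds `threshold`."""
    full = []
    for line in df_output.strip().splitlines()[1:]:  # skip the header row
        fields = line.split()
        if len(fields) < 6:
            continue
        use_pct = int(fields[4].rstrip("%"))  # e.g. "90%" -> 90
        if use_pct > threshold:
            full.append(fields[5])            # the mount point column
    return full

sample = """Filesystem 1024-blocks Used Available Capacity Mounted on
/dev/sda1 102400 92160 10240 90% /
tmpfs 51200 1024 50176 2% /tmp
/dev/sdb1 204800 174080 30720 85% /data
"""
print(find_full_filesystems(sample))  # -> ['/', '/data']
```

In practice the same check is often a one-liner in Bash; the Python form is easier to unit-test and extend.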
What the role needs
● Review the current DevOps infrastructure & redefine the code-merging strategy as per product roll-out objectives
● Define a deployment-frequency strategy based on the product roadmap and ongoing product-market-fit tweaks and changes
● Architect benchmark Docker configurations based on the planned stack
● Establish uniformity of environment from developer machines to the multiple production environments
● Plan & execute test automation infrastructure
● Setup automated stress testing environment
● Plan and roll out logging & stack-trace tools
● Review DevOps orchestration tools & choices
● Coordination with external data centers and AWS in the event of provisioning, outages or maintenance.
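The Docker-configuration and environment-uniformity points above can be sketched as follows, a hypothetical, illustrative example (all names invented) of generating one parameterized Dockerfile so developer machines and production environments build from the same template.

```python
# Hypothetical sketch: render a single parameterized Dockerfile template so
# that dev and prod share one image definition, illustrating the
# "uniform environment" goal above. All names are illustrative.

def render_dockerfile(base: str, app_dir: str, cmd: str) -> str:
    """Return Dockerfile text for a Python service rooted at `app_dir`."""
    lines = [
        f"FROM {base}",
        "WORKDIR /app",
        f"COPY {app_dir}/ /app/",
        "RUN pip install --no-cache-dir -r requirements.txt",
        f'CMD ["{cmd}"]',
    ]
    return "\n".join(lines)

dockerfile = render_dockerfile("python:3.11-slim", "services/api", "./entrypoint.sh")
print(dockerfile)
```

Only the base image and entrypoint vary per environment; everything else stays identical, which is what keeps developer and production images from drifting apart.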
Requirements
● Extensive experience with AWS cloud infrastructure deployment and monitoring
● Advanced knowledge of programming languages such as Python and Go, and of writing code and scripts
● Experience with Infrastructure as Code & DevOps management tools (Terraform, Packer) for asset management, monitoring, infrastructure cost estimation, and infrastructure version management
● Configure and manage data sources like MySQL, MongoDB, Elasticsearch, Redis, Cassandra, Hadoop, etc
● Experience with network, infrastructure and OWASP security standards
● Experience with web-server configurations - Nginx, HAProxy, SSL configuration with AWS, and understanding & management of sub-domain-based product rollout for clients.
● Experience with deployment and monitoring of event streaming & distributing technologies and tools - Kafka, RabbitMQ, NATS.io, socket.io
● Understanding of & experience with executing a Disaster Recovery Plan
● Working with other senior team members to devise and execute strategies for data backup and storage
● Be aware of current CVEs, potential attack vectors, and vulnerabilities, and apply patches as soon as possible
● Handle incident responses, troubleshooting and fixes for various services
Candidate must have a minimum of 8 years of IT experience.
IST time zone.
Candidates should have hands-on experience in
DevOps (GitLab, Artifactory, SonarQube, AquaSec, Terraform & Docker / K8s).
Thanks & Regards,
Anitha. K
TAG Specialist
We are looking for a DevOps Lead to join our team.
Responsibilities
• A technology professional who understands software development and can solve IT operational and deployment challenges using software-engineering tools and processes. This position requires an understanding of both software development (Dev) and deployment operations (Ops)
• Identify manual processes and automate them using various DevOps automation tools
• Maintain the organization’s growing cloud infrastructure
• Monitor and maintain DevOps environment stability
• Collaborate with distributed Agile teams to define technical requirements and resolve technical design issues
• Orchestrating builds and test setups using Docker and Kubernetes.
• Participate in designing and building Kubernetes, Cloud, and on-prem environments for maximum performance, reliability and scalability
• Share business and technical learnings with the broader engineering and product organization, while adapting approaches for different audiences
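As a minimal sketch of the Kubernetes orchestration work described above (hypothetical and stdlib-only; names invented for illustration), a Deployment manifest can be assembled as a plain dict and serialized for `kubectl apply -f -`.

```python
import json

# Hypothetical sketch: a minimal Kubernetes Deployment manifest built as a
# plain dict -- serialize with json (or yaml) and pipe to `kubectl apply -f -`.
def deployment_manifest(name: str, image: str, replicas: int = 2) -> dict:
    labels = {"app": name}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": labels},
        "spec": {
            "replicas": replicas,
            # the selector must match the pod template's labels
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {"containers": [{"name": name, "image": image}]},
            },
        },
    }

manifest = deployment_manifest("web", "nginx:1.25", replicas=3)
print(json.dumps(manifest, indent=2))
```

Generating manifests from code like this (rather than hand-editing YAML) is one common way teams keep build and test environments reproducible across clusters.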
Requirements
• Candidates applying for this position should possess at least 5 years of work experience as a DevOps Engineer.
• Candidate should have experience in ELK stack, Kubernetes, and Docker.
• Solid experience in the AWS environment.
• Should have experience with monitoring tools like Datadog or New Relic.
• Minimum of 5 years experience with code repository management, code merge and quality checks, continuous integration, and automated deployment & management using tools like Jenkins, SVN, Git, Sonar, and Selenium.
• Candidates must possess ample knowledge and experience in system automation, deployment, and implementation.
• Candidates must possess experience in using Linux, Jenkins, and ample experience in configuring and automating the monitoring tools.
• The candidates should also possess experience with the software development process and with tools and languages like Python, Java, MongoDB, shell scripting, PostgreSQL, and Git.
• Candidates should demonstrate knowledge in handling distributed data systems.
Examples: Elasticsearch, Cassandra, Hadoop, and others.
• Should have experience in GitLab CI.
Roles and Responsibilities
Job Responsibilities:
Section 1 -
- Responsible for managing and providing L1 support to build, design, deploy, and maintain Cloud solutions on AWS.
- Implement, deploy and maintain development, staging & production environments on AWS.
- Familiar with serverless architecture and services on AWS like Lambda, Fargate, EBS, Glue, etc.
- Understanding of Infrastructure as Code and familiarity with related tools like Terraform, Ansible, CloudFormation, etc.
Section 2 -
- Managing the Windows and Linux machines, Kubernetes, Git, etc.
- Responsible for L1 management of Servers, Networks, Containers, Storage, and Databases services on AWS.
Section 3 -
- Timely monitoring of production workload alerts and quickly addressing issues
- Responsible for monitoring and maintaining the Backup and DR process.
Section 4 -
- Responsible for documenting the process.
- Responsible for leading cloud implementation projects with end-to-end execution.
Qualifications: Bachelor of Engineering / MCA, preferably with an AWS / Cloud certification
Skills & Competencies
- Linux and Windows servers management and troubleshooting.
- AWS services experience: CloudFormation, EC2, RDS, VPC, EKS, ECS, Redshift, Glue, etc.
- Kubernetes and containers knowledge
- Understanding of setting up AWS messaging, streaming, and queuing services (MSK, Kinesis, SQS, SNS, MQ)
- Understanding of serverless architecture concepts
- Strong understanding of networking concepts
- Managing monitoring and alerting systems
- Sound knowledge of database concepts like data warehouses, data lakes, and ETL jobs
- Good Project management skills
- Documentation skills
- Backup and DR understanding
Soft Skills - Project management, Process Documentation
Ideal Candidate:
- AWS-certified, with 2-4 years of experience and a record of project execution.
- Someone who is interested in building sustainable cloud architecture with automation on AWS.
- Someone who is interested in learning and being challenged on a day-to-day basis.
- Someone who can take ownership of the tasks and is willing to take the necessary action to get it done.
- Someone who is curious to analyze and solve complex problems.
- Someone who is honest about the quality of their work and is comfortable taking ownership of both their successes and failures.
Behavioral Traits
- We are looking for someone who is interested in being part of a creative, innovation-driven environment with other team members.
- We are looking for someone who understands the idea/importance of teamwork and individual ownership at the same time.
- We are looking for someone who can debate logically, respectfully disagree, admit when proven wrong, learn from their mistakes, and grow quickly.
Type, Location
Full Time @ Anywhere in India
Desired Experience
2+ years
Job Description
What You’ll Do
● Deploy, automate and maintain web-scale infrastructure with leading public cloud vendors such as Amazon Web Services, Digital Ocean & Google Cloud Platform.
● Take charge of DevOps activities for CI/CD with the latest tech stacks.
● Acquire industry-recognized, professional cloud certifications (AWS/Google) in the capacity of developer or architect.
● Devise multi-region technical solutions.
● Implement the DevOps philosophy and strategy across different domains in the organisation.
● Build automation at various levels, including code deployment, to streamline the release process
● Will be responsible for architecture of cloud services
● 24*7 monitoring of the infrastructure
● Use programming/scripting in your day-to-day work
● Have shell experience - for example, PowerShell on Windows or Bash on *nix
● Use a Version Control System, preferably git
● Hands-on with the CLI/SDK/API of at least one public cloud (GCP, AWS, DO)
● Scalability, HA and troubleshooting of web-scale applications.
● Infrastructure-As-Code tools like Terraform, CloudFormation
● CI/CD systems such as Jenkins, CircleCI
● Container technologies such as Docker, Kubernetes, OpenShift
● Monitoring and alerting systems: e.g. NewRelic, AWS CloudWatch, Google StackDriver, Graphite, Nagios/ICINGA
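The monitoring-and-alerting systems listed above generally share one core pattern: fire an alert only when a metric breaches its threshold for several consecutive samples, to avoid flapping. A minimal, hypothetical sketch (function name and sample data invented):

```python
# Hypothetical sketch of the consecutive-breach (debouncing) pattern used by
# most alerting systems, e.g. alarm evaluation periods or check retries,
# so a single noisy sample does not page anyone.

def should_alert(samples: list[float], threshold: float, n: int = 3) -> bool:
    """True only if the last `n` samples are all at or above `threshold`."""
    if len(samples) < n:
        return False  # not enough history to judge
    return all(s >= threshold for s in samples[-n:])

cpu = [42.0, 55.0, 91.0, 93.5, 97.2]
print(should_alert(cpu, threshold=90.0))       # last 3 all >= 90 -> True
print(should_alert(cpu, threshold=90.0, n=4))  # 55.0 breaks the run -> False
```

Tuning `n` trades alert latency against noise: a larger window means fewer false pages but a slower response to real incidents.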
What you bring to the table
● Hands-on experience with cloud compute services, Cloud Functions, networking, load balancing, and autoscaling.
● Hands-on with GCP/AWS compute & networking services, e.g. Compute Engine, App Engine, Kubernetes Engine, Cloud Functions, networking (VPC, firewall, load balancer), Cloud SQL, Datastore.
● DBs: PostgreSQL, MySQL, Elasticsearch, Redis, Kafka, MongoDB, or other NoSQL systems
● Configuration management tools such as Ansible/Chef/Puppet
Bonus if you have…
● Basic understanding of networking (routing, switching, DNS) and storage
● Basic understanding of Protocol such as UDP/TCP
● Basic understanding of Cloud computing
● Basic understanding of Cloud computing models like SaaS, PaaS
● Basic understanding of git or any other source code repo
● Basic understanding of databases (SQL/NoSQL)
● Great problem solving skills
● Good communication skills
● Adaptable and eager to learn
ApnaComplex is one of India’s largest and fastest-growing PropTech disruptors within the Society & Apartment Management business. The SaaS-based B2C platform is headquartered out of India’s tech start-up hub, Bangalore, with branches in 6 other cities. It currently empowers 3,600 Societies, managing over 6 Lakh Households in over 80 Indian cities, to effortlessly manage all aspects of running large complexes.
ApnaComplex is part of ANAROCK Group. ANAROCK Group is India's leading specialized real estate services company having diversified interests across the real estate value chain.
If it excites you to - drive innovation, create industry-first solutions, build new capabilities ground-up, and work with multiple new technologies, ApnaComplex is the place for you.
Must have-
- Knowledge of Docker
- Knowledge of Terraform
- Knowledge of AWS
Good to have -
- Kubernetes
- Scripting languages: PHP, Go, and Python
- Webserver knowledge
- Logging and monitoring experience
- Test, build, design, deploy, and maintain continuous integration and continuous delivery processes using tools like Jenkins, Maven, Git, etc.
- Build and maintain highly available production systems.
- Must know how to choose the tools and technologies that best fit the business needs.
- Develop software to integrate with internal back-end systems.
- Investigate and resolve technical issues.
- Problem-solving attitude.
- Ability to automate testing, deployment, and monitoring of code.
- Work in close coordination with the development and operations teams to ensure the application performs in line with customer expectations.
- Lead and guide the team in identifying and implementing new technologies.
Skills that will help you build a success story with us
- An ability to quickly understand and solve new problems
- Strong interpersonal skills
- Excellent data interpretation
- Context-switching
- Intrinsically motivated
- A tactical and strategic track record for delivering research-driven results
Quick Glances:
- What to look for at ApnaComplex: https://www.apnacomplex.com/why-apnacomplex
- Who are we? A glimpse of ApnaComplex, know us better: https://www.linkedin.com/company/1070467/admin/
- ApnaComplex - Media - visit our media page: https://www.apnacomplex.com/media-buzz
ANAROCK Ethos - Values Over Value:
Our assurance of consistent ethical dealing with clients and partners reflects our motto - Values Over Value.
We value diversity within ANAROCK Group and are committed to offering equal opportunities in employment. We do not discriminate against any team member or applicant for employment based on nationality, race, color, religion, caste, gender identity / expression, sexual orientation, disability, social origin and status, indigenous status, political opinion, age, marital status or any other personal characteristics or status. ANAROCK Group values all talent and will do its utmost to hire, nurture and grow them.
We are hiring for a #DevOps Engineer for a reputed #MNC
Job Description:
Total exp - 6+ years
Must have:
Minimum 3-4 years hands-on experience in #Kubernetes and #Docker
Proficiency in #AWS Cloud
Good to have Kubernetes admin certification
Job Responsibilities:
Responsible for managing Kubernetes cluster
Deploying infrastructure for the project
Build #CICD pipeline
Looking for #Immediate Joiners only
Location: Pune
Salary: As per market standards
Mode: #Work from office
Azure DevOps
On premises to Azure Migration
Docker, Kubernetes
Terraform, CI/CD pipeline
Experience – 9+ years
Location – BG, Hyderabad, Remote, Hybrid
Budget – up to 30 LPA
At Neurosensum we are committed to making customer feedback more actionable. We have developed a platform called SurveySensum, which breaks the conventional market-research turnaround time.
SurveySensum is becoming a great tool not only for capturing feedback but also for extracting useful insights with quick workflow setups and dashboards. We have more than 7 channels through which we can collect feedback. This makes us challenge conventional software-development design principles. The team likes to grind and helps each other through tough situations.
Day to day responsibilities include:
- Work on the deployment of code via Bitbucket, AWS CodeDeploy, and manual methods
- Work on Linux/Unix OS and multi-tech application patching
- Manage, coordinate, and implement software upgrades, patches, and hotfixes on servers.
- Create and modify scripts or applications to perform tasks
- Provide input on ways to improve the stability, security, efficiency, and scalability of the environment
- Easing developers’ lives so that they can focus on business logic rather than on deploying and maintaining it.
- Managing releases for the sprint.
- Educating the team on best practices.
- Finding ways to avoid human error and save time by automating the processes using Terraform, CloudFormation, Bitbucket pipelines, CodeDeploy, scripting
- Implementing cost-effective measures on the cloud and minimizing existing costs.
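The automation-scripting responsibilities above can be illustrated with a minimal, hypothetical sketch (policy parameters and names invented): a backup-retention pruner that keeps recent daily snapshots plus weekly (Monday) snapshots, and reports the rest as deletable, which is also a direct cost-minimization measure.

```python
from datetime import date, timedelta

# Hypothetical sketch of an automation script: apply a retention policy that
# keeps the last `keep_daily` daily snapshots plus one weekly (Monday)
# snapshot for `keep_weekly` weeks, and return the dates safe to delete.

def snapshots_to_delete(snapshots: list[date], today: date,
                        keep_daily: int = 7, keep_weekly: int = 4) -> list[date]:
    keep = set()
    for d in snapshots:
        age = (today - d).days
        if 0 <= age < keep_daily:
            keep.add(d)                       # recent daily snapshot
        elif d.weekday() == 0 and age < keep_weekly * 7:
            keep.add(d)                       # Monday snapshot in weekly window
    return sorted(d for d in snapshots if d not in keep)

today = date(2024, 1, 31)                     # a Wednesday
snaps = [today - timedelta(days=i) for i in range(30)]
doomed = snapshots_to_delete(snaps, today)
print(len(doomed), "of", len(snaps), "snapshots can be deleted")
```

The same logic plugs into whatever actually holds the backups (S3 lifecycle rules, EBS snapshots, database dumps); only the listing and deletion calls change.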
Skills and prerequisites
- OOP knowledge
- Problem solving nature
- Willing to do the R&D
- Works with the team and supports their queries patiently
- Bringing new things to the table - staying updated
- Prioritizing solutions over problems.
- Willing to learn and experiment
- Techie at heart
- Git basics
- Basic AWS or any cloud platform - creating and managing EC2, Lambda, IAM, S3, etc.
- Basic Linux handling
- Docker and orchestration (Great to have)
- Scripting – python (preferably)/bash
Skills: Python, Docker or Ansible, AWS
➢ Experience building a multi-region, highly available, auto-scaling infrastructure that optimizes performance and cost; plan for future infrastructure as well as maintain & optimize existing infrastructure.
➢ Conceptualize, architect and build automated deployment pipelines in a CI/CD environment like Jenkins.
➢ Conceptualize, architect and build a containerized infrastructure using Docker, Mesosphere or similar SaaS platforms.
➢ Work with developers to institute systems, policies and workflows which allow for rollback of deployments.
➢ Triage releases of applications to the production environment on a daily basis.
➢ Interface with developers and triage SQL queries that need to be executed in production environments.
➢ Maintain a 24/7 on-call rotation to respond to and support troubleshooting of issues in production.
➢ Assist the developers and on-calls for other teams with post-mortems, follow-up and review of issues affecting production availability.
➢ Establishing and enforcing systems monitoring tools and standards
➢ Establishing and enforcing Risk Assessment policies and standards
➢ Establishing and enforcing Escalation policies and standards
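The rollback-workflow and release-triage bullets above hinge on a simple decision rule; here is a minimal, hypothetical sketch (function name and thresholds invented) of an error-rate gate that flags a new release for rollback when it regresses against the previous one.

```python
# Hypothetical sketch of a rollback gate: compare the new release's error
# rate against the previous release's, and flag a rollback when it regresses
# beyond a tolerance -- the decision rule behind the rollback workflows above.

def should_roll_back(baseline_errors: int, baseline_requests: int,
                     canary_errors: int, canary_requests: int,
                     tolerance: float = 0.01) -> bool:
    """True if the new release's error rate exceeds baseline by > `tolerance`."""
    if canary_requests == 0:
        return False                          # no traffic yet; nothing to judge
    baseline_rate = baseline_errors / max(baseline_requests, 1)
    canary_rate = canary_errors / canary_requests
    return canary_rate - baseline_rate > tolerance

print(should_roll_back(50, 10_000, 30, 1_000))  # 3.0% vs 0.5% -> True
print(should_roll_back(50, 10_000, 6, 1_000))   # 0.6% vs 0.5% -> False
```

In a real pipeline the two rates would come from the monitoring system established by the bullets above, and a `True` result would trigger the agreed rollback workflow rather than a manual decision.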