
Manager - Information Security
HDFC Life is one of India's leading and most valuable private life insurance companies.

Key Qualifications :
- At least 2 years of hands-on experience with cloud infrastructure on AWS or GCP
- Exposure to configuration management and orchestration tools at scale (e.g. Terraform, Ansible, Packer)
- Knowledge of DevOps tools (e.g. Jenkins, Groovy, and Gradle)
- Familiarity with monitoring and alerting tools (e.g. CloudWatch, ELK stack, Prometheus); see the sketch after these lists
- Proven ability to work independently or as an integral member of a team
Preferable Skills :
- Familiarity with standard IT security practices such as encryption, credential and key management
- Proven ability to pick up new programming languages (e.g. Java, Python) to support DevOps operations and cloud transformation
- Familiarity with web standards (e.g. REST APIs, web security mechanisms)
- Multi-cloud management experience with GCP / Azure
- Experience in performance tuning, service outage management, and troubleshooting
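For context on the monitoring and alerting exposure listed above, day-to-day work often involves automating checks like the one below. This is a minimal sketch, assuming boto3 with configured credentials; the instance ID, SNS topic ARN, and alarm name are placeholders, not part of the posting.

```python
# Minimal sketch: create a CloudWatch alarm on EC2 CPU utilisation and
# notify an SNS topic. All identifiers below are placeholders.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="ap-south-1")

cloudwatch.put_metric_alarm(
    AlarmName="ec2-high-cpu",                    # hypothetical alarm name
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,                                  # 5-minute datapoints
    EvaluationPeriods=2,                         # two consecutive breaches
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:ap-south-1:123456789012:ops-alerts"],
)
```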


We are now seeking a talented and motivated individual to contribute to our product in the cloud data
protection space. The ability to clearly understand customer needs in a cloud environment, excellent
troubleshooting skills, and the ability to stay focused on a problem until it is resolved are required; a short illustration of this kind of work appears after the skills lists below.
Responsibilities Include:
Review proposed feature requirements
Create test plans and test cases
Analyze performance; diagnose and troubleshoot issues
Enter and track defects
Interact with customers, partners, and development teams
Research customer issues and product initiatives
Provide input for service documentation
Required Skills:
Bachelor's degree in Computer Science, Information Systems or related discipline
3+ years' experience, including Software-as-a-Service and/or DevOps engineering experience
Experience with AWS services like VPC, EC2, RDS, SES, ECS, Lambda, S3, ELB
Experience with technologies such as REST, Angular, Messaging, Databases, etc.
Strong troubleshooting skills and issue isolation skills
Possess excellent communication skills (written and verbal English)
Must be able to work as an individual contributor within a team
Ability to think outside the box
Experience in configuring infrastructure
Knowledge of CI / CD
Desirable skills:
Programming skills in scripting languages (e.g., python, bash)
Knowledge of Linux administration
Knowledge of testing tools/frameworks: TestNG, Selenium, etc.
Knowledge of Identity and Security
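As a rough illustration of the cloud data protection work described above (and the kind of check that might feed a test plan or defect report), here is a minimal sketch that audits S3 buckets for default server-side encryption. It assumes boto3 with credentials allowed to list buckets and read encryption configuration; nothing here is taken from the posting itself.

```python
# Minimal sketch: flag S3 buckets that lack default server-side encryption.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        s3.get_bucket_encryption(Bucket=name)
        print(f"{name}: default encryption enabled")
    except ClientError as err:
        if err.response["Error"]["Code"] == "ServerSideEncryptionConfigurationNotFoundError":
            print(f"{name}: no default encryption (candidate defect)")
        else:
            raise
```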
Do Your Thng
DYT (Do Your Thng) is an app where social media users can share brands they love with their followers and earn money while doing so! We believe everyone is an influencer. Our aim is to democratise social media and allow people to be rewarded for the content they post. How does DYT help you? It accelerates your career through collaboration opportunities with top brands and gives you access to a community full of experts in the influencer space.
Role: DevOps
Job Description:
We are looking for experienced DevOps Engineers to join our Engineering team. The candidate will work with our engineers and interact with the tech team to deliver high-quality web applications for a product.
Required Experience
- DevOps Engineer with 2+ years of experience in development and production operations supporting Linux- and Windows-based applications and cloud deployments (AWS/GCP stack)
- Experience working with Continuous Integration and Continuous Deployment Pipeline
- Exposure to managing LAMP stack-based applications
- Experience automating resource provisioning using tools such as CloudFormation, Terraform, and ARM templates
- Experience working closely with clients, understanding their requirements, and designing and implementing quality solutions to meet their needs
- Ability to take ownership of the work carried out
- Experience coordinating with the rest of the team to deliver well-architected and high-quality solutions
- Experience deploying Docker based applications
- Experience with AWS services.
- Excellent verbal and written communication skills
Desired Experience
- Exposure to AWS, Google Cloud, and Azure Cloud
- Experience in Jenkins, Ansible, Terraform
- Experience building monitoring tools and responding to alarms triggered in the production environment
- Willingness to quickly become a member of the team and to do what it takes to get the job done
- Ability to work well in a fast-paced environment and listen and learn from stakeholders
- Demonstrate a strong work ethic and incorporate company values in your everyday work.
About the Company
Blue Sky Analytics is a Climate Tech startup that combines the power of AI and satellite data to aid in the creation of a global environmental data stack. Our funders include Beenext and Rainmatter. Over the next 12 months, we aim to expand to 10 environmental datasets spanning water, land, heat, and more!
We are looking for a DevOps Engineer who can help us build the infrastructure required to handle huge datasets at scale. Primarily, you will work with AWS services like EC2, Lambda, ECS, Containers, etc. As part of our core development crew, you’ll figure out how to deploy applications with high availability and fault tolerance, along with a monitoring solution that raises alerts for multiple microservices and pipelines. Come save the planet with us!
Your Role
- Scale applications up and down on command (see the sketch after this list).
- Manage a cluster of microservices talking to each other.
- Build pipelines for huge data ingestion, processing, and dissemination.
- Optimize services for low cost and high efficiency.
- Maintain a highly available and scalable PostgreSQL database cluster.
- Maintain an alerting and monitoring system using Prometheus, Grafana, and Elasticsearch.
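To make the first bullet above concrete: scaling a service up or down on command, using the AWS services named in this posting, might look like the sketch below. It assumes boto3 and an existing ECS service; the cluster and service names are invented for illustration.

```python
# Minimal sketch: set the desired task count of an ECS service and wait
# until the deployment stabilises. Names are placeholders.
import boto3

ecs = boto3.client("ecs", region_name="ap-south-1")

def scale_service(cluster: str, service: str, desired_count: int) -> None:
    ecs.update_service(cluster=cluster, service=service, desiredCount=desired_count)
    ecs.get_waiter("services_stable").wait(cluster=cluster, services=[service])

if __name__ == "__main__":
    scale_service("data-pipeline-cluster", "ingestion-api", desired_count=4)
```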
Requirements
- 1-4 years of work experience.
- Strong emphasis on Infrastructure as Code - CloudFormation, Terraform, Ansible.
- CI/CD concepts and implementation using CodePipeline, GitHub Actions.
- Strong command of AWS services like IAM, EC2, ECS, Lambda, S3, etc.
- Advanced containerization - Docker, Kubernetes, ECS.
- Experience with managed services like database clusters and distributed services on EC2.
- Self-starters and curious folks who don't need to be micromanaged.
- Passionate about Blue Sky Climate Action and working with data at scale.
Benefits
- Work from anywhere: Work by the beach or from the mountains.
- Open source at heart: We are building a community where you can use, contribute to, and collaborate on open-source work.
- Own a slice of the pie: Possibility of becoming an owner by investing in ESOPs.
- Flexible timings: Fit your work around your lifestyle.
- Comprehensive health cover: Health cover for you and your dependents to keep you tension free.
- Work Machine of choice: Buy a device and own it after completing a year at BSA.
- Quarterly Retreats: Yes, there's work, but then there's all the non-work + fun aspect, aka the retreat!
- Yearly vacations: Take time off to rest and get ready for the next big assignment by using your paid leave.
DevOps Engineer
Notice Period: 45 days / Immediate Joining
Banyan Data Services (BDS) is a US-based infrastructure services company, headquartered in San Jose, California, USA. It provides full-stack managed services to support business applications and data infrastructure, delivering data solutions and services on bare metal, on-premises, and all cloud platforms. Our engagement model is built on standard DevOps practice and the SRE model.
We are looking for a DevOps Engineer to help us build functional systems that improve the customer experience. We offer you an opportunity to join our rocket-ship startup, run by a world-class executive team. We are looking for candidates who aspire to be part of the cutting-edge solutions and services we offer to address next-gen data evolution challenges, and who are willing to use their experience in areas directly related to infrastructure services, Software as a Service, and cloud services to create a niche in the market.
Key Qualifications
· 4+ years of experience as a DevOps Engineer with monitoring, troubleshooting, and diagnosing infrastructure systems.
· Experience implementing continuous integration and deployment pipelines using Jenkins, JIRA, JFrog, etc.
· Strong experience in Linux/Unix administration.
· Experience with automation/configuration management using Puppet, Chef, Ansible, Terraform, or other similar tools.
· Expertise in multiple coding and scripting languages including Shell, Python, and Perl
· Hands-on exposure to modern IT infrastructure (e.g. Docker Swarm/Mesos/Kubernetes/OpenStack)
· Exposure to any relational database technology (MySQL/Postgres/Oracle) or any NoSQL database
· Worked on open-source tools for logging, monitoring, search engines, caching, etc.
· Professional certification in AWS or any other cloud is preferable
· Excellent problem solving and troubleshooting skills
· Must have good written and verbal communication skills
Key Responsibilities
Ambitious individuals who can work under their own direction towards agreed targets/goals.
Must be flexible with office timings to accommodate multinational clients' time zones.
Will be involved in solution design from the conceptual stage through the development cycle and deployment.
Be involved in development operations and support internal teams.
Improve infrastructure uptime, performance, resilience, and reliability through automation.
Willing to learn new technologies and work on research-oriented projects.
Proven interpersonal skills while contributing to team effort by accomplishing related results as needed.
Scope and deliver solutions with the ability to design solutions independently based on high-level architecture.
Independent thinking, ability to work in a fast-paced environment with creativity and brainstorming
www.banyandata.com
As part of the engineering team, you would be expected to have deep technology expertise with a passion for building highly scalable products. This is a unique opportunity where you can impact the lives of people across 150+ countries!
Responsibilities
• Collaborate in large-scale systems design discussions.
• Deploy and maintain in-house/customer systems, ensuring high availability, performance, and optimal cost.
• Automate build pipelines, ensuring the right architecture for CI/CD.
• Work with engineering leaders to ensure cloud security.
• Develop standard operating procedures for various facets of infrastructure services (CI/CD, Git branching, SAST, quality gates, auto scaling).
• Perform and automate regular backups of servers and databases (a sketch follows this list); ensure rollback and restore capabilities are real-time and zero-downtime.
• Lead the entire DevOps charter for ONE Championship. Mentor other DevOps engineers. Ensure industry standards are followed.
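The backup bullet above could be automated along the following lines. This is only a sketch under assumptions not stated in the posting: a PostgreSQL database reachable via pg_dump, boto3 credentials, and placeholder host, database, and bucket names; the real setup (Azure-first per the requirements) would differ.

```python
# Minimal sketch: nightly PostgreSQL dump pushed to object storage with a
# date-stamped key. Host, database, and bucket names are placeholders.
import datetime
import subprocess
import boto3

def backup_postgres(host: str, database: str, bucket: str) -> str:
    stamp = datetime.datetime.utcnow().strftime("%Y%m%dT%H%M%SZ")
    dump_path = f"/tmp/{database}-{stamp}.dump"
    # Custom-format dump so it can be restored selectively with pg_restore.
    subprocess.run(["pg_dump", "-h", host, "-Fc", "-f", dump_path, database], check=True)
    key = f"backups/{database}/{stamp}.dump"
    boto3.client("s3").upload_file(dump_path, bucket, key)
    return key

if __name__ == "__main__":
    print(backup_postgres("db.internal.example", "app_db", "example-db-backups"))
```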
Requirements
• Overall 5+ years of experience as a DevOps Engineer/Site Reliability Engineer
• B.E/B.Tech in CS or equivalent streams from an institute of repute
• Experience in Azure is a must. AWS experience is a plus
• Experience in Kubernetes, Docker, and containers
• Proficiency in developing and deploying fully automated environments using Puppet/Ansible and Terraform
• Experience with monitoring tools like Nagios/Icinga, Prometheus, AlertManager, New Relic
• Good knowledge of source code control (git)
• Expertise in Continuous Integration and Continuous Deployment setup using Azure Pipelines or Jenkins
• Strong experience in programming languages. Python is preferred
• Experience in scripting and unit testing
• Basic knowledge of SQL & NoSQL databases
• Strong Linux fundamentals
• Experience with SonarQube, Locust & BrowserStack is a plus
Implement DevOps capabilities in cloud offerings using CI/CD toolsets and automation
Define and set development, test, release, update, and support processes for DevOps operations
Troubleshoot and fix code bugs
Coordinate and communicate within the team and with the client team
Select and deploy appropriate CI/CD tools
Strive for continuous improvement and build a continuous integration, continuous delivery, and continuous deployment pipeline (CI/CD pipeline)
Prerequisite skills:
Experience working on Linux based infrastructure
Experience scripting in at least 2 languages (Bash plus Python or Ruby)
Working knowledge of various tools, open-source technologies, and cloud services
Experience with Docker, AWS (EC2, S3, IAM, EKS, Route 53), Ansible, Helm, Terraform (see the sketch after this list)
Experience with building, maintaining, and deploying Kubernetes environments and applications
Experience with build and release automation and dependency management; implementing CI/CD
Clear fundamentals of DNS, HTTP, HTTPS, microservices, monoliths, etc.
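A small example of the Kubernetes experience asked for above: the sketch below lists Deployments across namespaces and flags any that are not fully ready. It assumes the official kubernetes Python client and a working kubeconfig; it is illustrative only.

```python
# Minimal sketch: report ready vs. desired replicas for every Deployment.
from kubernetes import client, config

config.load_kube_config()          # use config.load_incluster_config() inside a pod
apps = client.AppsV1Api()

for dep in apps.list_deployment_for_all_namespaces().items:
    desired = dep.spec.replicas or 0
    ready = dep.status.ready_replicas or 0
    state = "OK" if ready >= desired else "DEGRADED"
    print(f"{state:8} {dep.metadata.namespace}/{dep.metadata.name}: {ready}/{desired} ready")
```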
We are looking for a Sr. Engineer - DevOps and SysOps who will be responsible for managing AWS and Azure cloud computing. Your primary focus will be to help multiple projects with various cloud service implementations, create and manage CI/CD pipelines for deployment, and explore new cloud services and help projects implement them.
Technical Requirements & Responsibilities
- Have 4+ years’ experience as a DevOps and SysOps Engineer.
- Apply cloud computing skills to deploy upgrades and fixes on AWS and Azure (GCP is optional / Good to have).
- Design, develop, and implement software integrations based on user feedback.
- Troubleshoot production issues and coordinate with the development team to streamline code deployment.
- Implement automation tools and frameworks (CI/CD pipelines).
- Analyze code and communicate detailed reviews to development teams to ensure a marked improvement in applications and the timely completion of projects.
- Collaborate with team members to improve the company’s engineering tools, systems and procedures, and data security.
- Optimize the company’s computing architecture.
- Conduct systems tests for security, performance, and availability.
- Develop and maintain design and troubleshooting documentation.
- Expert in code deployment tools (Puppet, Ansible, and Chef).
- Can maintain Java / PHP / Ruby on Rails / .NET web applications.
- Experience in network, server, and application-status monitoring (illustrated after this list).
- Possess a strong command of software-automation production systems (Jenkins and Selenium).
- Expertise in software development methodologies.
- You have working knowledge of known DevOps tools like Git and GitHub.
- Possess a problem-solving attitude.
- Can work independently and as part of a team.
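As a small illustration of the application-status monitoring mentioned above, the sketch below probes a couple of HTTP endpoints and exits non-zero if any are down, so it could be wired into cron or an alerting job. The endpoints are placeholders and the approach is assumed, not taken from the posting.

```python
# Minimal sketch: probe application health endpoints; exit 1 if any are down.
import sys
import requests

ENDPOINTS = {
    "api": "https://api.example.com/health",   # placeholder URLs
    "web": "https://www.example.com/",
}

def check(name: str, url: str) -> bool:
    try:
        ok = requests.get(url, timeout=5).status_code == 200
    except requests.RequestException:
        ok = False
    print(f"{'UP' if ok else 'DOWN':4} {name} -> {url}")
    return ok

if __name__ == "__main__":
    sys.exit(0 if all([check(n, u) for n, u in ENDPOINTS.items()]) else 1)
```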
Soft Skills Requirements
- Strong communication skills
- Agile and a quick learner
- Attention to detail
- Organizational skills
- Understanding of the Software development life cycle
- Good Analytical and problem-solving skills
- Self-motivated with the ability to prioritize, meet deadlines, and manage changing priorities
- Should have a high level of energy, working as an individual contributor and as part of a team.
- Good command over verbal and written English communication
- 2+ years of demonstrable experience leading site reliability and performance in large-scale, high-traffic environments
- 2+ years of hands-on experience as a DevOps engineer
- Strong leadership, communication and interpersonal skills geared to getting things done
- A commitment to developing themselves and the talent in their charge, fostering and creating opportunities for the team
- Strong understanding of SRE concepts and the DevOps culture. Set the direction and strategy for your team, and help shape the overall SRE program for the company
- Ability to lead on complicated technical issues and communicate status updates/RCAs to management and customers.
- Own site stability, performance, capacity planning, and DevOps recruitment.

