- Public clouds, such as AWS, Azure, or Google Cloud Platform
- Automation technologies, such as Kubernetes or Jenkins
- Configuration management tools, such as Puppet or Chef
- Scripting languages, such as Python or Ruby
Key Responsibilities:
• Collaborate with Data Scientists to test and scale new algorithms through pilots, and later industrialize the solutions at scale across the Group's fashion network
• Influence, build, and maintain the large-scale data infrastructure required for AI projects, and integrate with external IT infrastructure/services to provide an end-to-end solution
• Leverage an understanding of software architecture and software design patterns to write scalable, maintainable, well-designed and future-proof code
• Design, develop and maintain the framework for the analytical pipeline
• Develop common components that address pain points in machine learning projects, such as model lifecycle management, feature stores, and data quality evaluation
• Provide input and help implement framework and tools to improve data quality
• Work in cross-functional agile teams of highly skilled software/machine learning engineers, data scientists, designers, product managers and others to build the AI ecosystem within the Group
• Deliver on time, demonstrating a strong commitment to deliver on the team mission and agreed backlog
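To make the "common components" bullet concrete, here is a minimal, illustrative sketch of a feature store: an in-memory lookup keyed by entity and feature name that also records a timestamp for lifecycle/audit purposes. All names here are hypothetical; a production feature store would sit on a real data store and handle versioning and point-in-time correctness.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class FeatureStore:
    """Minimal in-memory feature store: values keyed by (entity_id, feature),
    with a timestamp kept alongside each value for lifecycle/audit purposes."""
    _data: dict = field(default_factory=dict)

    def put(self, entity_id: str, feature: str, value):
        # Store the latest value with a UTC write timestamp.
        self._data[(entity_id, feature)] = (value, datetime.now(timezone.utc))

    def get(self, entity_id: str, feature: str, default=None):
        entry = self._data.get((entity_id, feature))
        return entry[0] if entry else default

store = FeatureStore()
store.put("sku-123", "avg_weekly_sales", 42.5)
print(store.get("sku-123", "avg_weekly_sales"))  # 42.5
```

The same interface (`put`/`get` by entity and feature name) is what data scientists would code against, while the engineering team swaps the backing storage underneath.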
At least 4 years of strong DevOps experience
Strong experience in Unix/Linux/Python scripting
Strong networking knowledge; vSphere networking stack knowledge desired
Experience with Docker and Kubernetes
Experience with cloud technologies (AWS/Azure)
Exposure to continuous integration/delivery tools such as Jenkins or Spinnaker
Exposure to configuration management systems such as Ansible
Knowledge of resource monitoring systems
Ability to scope and estimate
Strong verbal and written communication skills
Advanced knowledge of Docker and Kubernetes.
Exposure to Blockchain-as-a-Service (BaaS) offerings such as Chainstack, IBM Blockchain Platform, Oracle Blockchain Cloud, Rubix, or VMware.
Capable of provisioning and maintaining local enterprise blockchain platforms (Hyperledger Fabric, BaaS, Corda, Ethereum) for development and QA.
Implementing various development, testing, automation tools, and IT infrastructure
Selecting and deploying appropriate CI/CD tools
Required Candidate Profile
Working knowledge of any web server, e.g. NGINX or Apache
Hands-on experience with Linux administration
Experience using Python or Shell scripting (for Automation)
Hands-on experience with Implementation of CI/CD Processes
Experience working with one cloud platform (AWS, Azure, or Google Cloud)
Experience working with configuration management tools such as Ansible and Chef
Experience working with the containerization tool Docker
Experience working with the container orchestration tool Kubernetes
Experience in source control management, including SVN, Bitbucket, and/or GitHub
Experience with the setup and management of monitoring tools such as Nagios, Sensu, or Prometheus
Hands-on experience with Linux, a scripting language, and AWS is mandatory
Troubleshoot and triage development and production issues
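The scripting-for-automation requirement above typically means small glue scripts around deployments. As a minimal, hedged sketch, here is a retry-with-backoff health wait such as a deploy script might run after restarting a service; the probe is any zero-argument callable (in practice it might hit an HTTP `/health` endpoint or shell out to `systemctl is-active`).

```python
import time

def wait_for_healthy(probe, retries=3, delay=1.0):
    """Poll a health probe (a zero-arg callable returning True/False)
    until it passes or retries are exhausted. Returns True on success."""
    for attempt in range(1, retries + 1):
        if probe():
            return True
        if attempt < retries:
            time.sleep(delay)  # back off before the next attempt
    return False

# Example: a fake probe that succeeds on its third call.
calls = {"n": 0}
def flaky_probe():
    calls["n"] += 1
    return calls["n"] >= 3

print(wait_for_healthy(flaky_probe, retries=5, delay=0))  # True
```

Keeping the probe injectable makes the retry logic trivially unit-testable, which matters when the script gates a production rollout.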
- 7-10 years of experience with secure SDLC/DevSecOps practices, such as automating security processes within the CI/CD pipeline.
- At least 4 years of experience designing and securing data lake and web applications deployed to AWS or Azure; scripting/automation skills in Python, Shell, YAML, and JSON.
- At least 4 years of hands-on experience with software development lifecycle, Agile project management (e.g. Jira, Confluence), source code management (e.g. Git), build automation (e.g. Jenkins), code linting and code quality (e.g. SonarQube), test automation (e.g. Selenium)
- Hands-on experience with, and a solid understanding of, AWS- and Azure-based infrastructure and applications
- Experience writing CloudFormation templates; Jenkins, Kubernetes, Docker, and microservice application architecture and deployment.
- Strong know-how of VA/PT (vulnerability assessment/penetration testing) integration in the CI/CD pipeline.
- Experience handling financial solutions and customer-facing applications
Roles
- Accelerate enterprise cloud adoption while enabling rapid and stable delivery of capabilities using continuous integration and continuous deployment principles, methodologies, and technologies
- Manage & deliver diverse cloud [AWS, Azure, GCP] DevSecOps journeys
- Identify, prototype, engineer, and deploy emerging software engineering methodologies and tools
- Maximize automation and enhance DevSecOps pipelines and other tasks
- Define and promote enterprise software engineering and DevSecOps standards, practices, and behaviors
- Operate and support a suite of enterprise DevSecOps services
- Implement security automation to shorten the feedback loop between the development and deployment processes.
- Support project teams to adopt & integrate the DevSecOps environment
- Manage application vulnerabilities, data security, encryption, tokenization, access management, secure SDLC, and SAST/DAST
- Coordinate with development and operations teams for practical automation solutions and custom flows.
- Own DevSecOps initiatives by providing objective, practical and relevant ideas, insights, and advice.
- Act as release gatekeeper, with an understanding of the OWASP Top 10 vulnerabilities, NIST SP-800-xx publications, the NVD, CVSS scoring, and related concepts
- Build workflows to ensure a successful DevSecOps journey for various enterprise applications.
- Understand the strategic direction to reach business goals across multiple projects & teams
- Collaborate with development teams to understand project deliverables and promote DevSecOps culture
- Formulate & deploy cloud automation strategies and tools
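The "release gatekeeper" role above usually boils down to a pipeline step that compares scanner findings against a CVSS threshold. As an illustrative sketch (the dict shape of `findings` is an assumption, standing in for whatever a SAST/DAST tool actually emits):

```python
def release_gate(findings, cvss_threshold=7.0, max_allowed=0):
    """Fail the pipeline stage if the number of findings at or above
    the CVSS threshold exceeds max_allowed. Returns (passed, blockers)."""
    high = [f for f in findings if f.get("cvss", 0.0) >= cvss_threshold]
    passed = len(high) <= max_allowed
    return passed, high

findings = [
    {"id": "CVE-2021-44228", "cvss": 10.0},  # Log4Shell: blocks the release
    {"id": "CVE-2020-1234",  "cvss": 4.3},   # below threshold: allowed through
]
ok, blockers = release_gate(findings)
print(ok, [f["id"] for f in blockers])  # False ['CVE-2021-44228']
```

In a real pipeline this function's boolean result would set the stage's exit code, so a failed gate halts promotion to the next environment.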
Skills
- Knowledge of the DevSecOps culture and principles.
- An understanding of cloud technologies & components
- A flair for programming languages such as Shell, Python, and JavaScript.
- Strong teamwork and communication skills.
- Knowledge of threat modeling and risk assessment techniques.
- Up-to-date knowledge of cybersecurity threats, current best practices, and the latest software.
- An understanding of programs such as Puppet, Chef, ThreatModeler, Checkmarx, Immunio, and Aqua.
- Strong know-how of Kubernetes, Docker, and AWS/Azure-based deployments
- On-the-job learning of new programming languages, automation tools, and deployment architectures
Cloud native technologies - Kubernetes (EKS, GKE, AKS), AWS ECS, Helm, CircleCI, Harness, serverless platforms (AWS Fargate, etc.)
Infrastructure as Code tools - Terraform, CloudFormation, Ansible
Scripting - Python, Bash
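The IaC-plus-scripting combination above often shows up as Python pre-flight checks over templates before they are handed to the provisioner. As a hedged sketch, here is a tiny validator for a CloudFormation-style template dict; it mirrors only a small slice of what `aws cloudformation validate-template` checks, and ignores real-world details like intrinsic functions and pseudo-parameters.

```python
def validate_template(template):
    """Sanity-check a CloudFormation-style template dict: every resource
    needs a Type, and every Ref must point at a declared resource or
    parameter. Returns a list of error strings (empty means valid)."""
    errors = []
    resources = template.get("Resources", {})
    params = template.get("Parameters", {})
    if not resources:
        errors.append("template has no Resources")
    for name, body in resources.items():
        if "Type" not in body:
            errors.append(f"resource {name} missing Type")
    known = set(resources) | set(params)

    def walk(node):
        # Recursively look for {"Ref": ...} nodes anywhere in the tree.
        if isinstance(node, dict):
            ref = node.get("Ref")
            if isinstance(ref, str) and ref not in known:
                errors.append(f"unresolved Ref: {ref}")
            for v in node.values():
                walk(v)
        elif isinstance(node, list):
            for v in node:
                walk(v)

    walk(resources)
    return errors

tmpl = {"Resources": {"Bucket": {"Type": "AWS::S3::Bucket",
                                 "Properties": {"BucketName": {"Ref": "EnvName"}}}}}
print(validate_template(tmpl))  # ['unresolved Ref: EnvName']
```

Running such a check in CI catches broken references minutes, rather than a failed stack rollback, after a commit.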
Desired Skills & Experience:
Projects/internships with coding experience in JavaScript, Python, Go, Java, or similar.
Hands-on scripting and software development fluency in any programming language (Python, Go, Node, Ruby).
Basic understanding of Computer Science fundamentals - Networking, Web Architecture etc.
Infrastructure automation experience with knowledge of at least a few of these tools: Chef, Puppet, Ansible, CloudFormation, Terraform, Packer, Jenkins etc.
Bonus points if you have contributed to open-source projects or participated in competitive coding platforms like HackerEarth, Codeforces, SPOJ, etc.
You’re willing to learn various new technologies and concepts. The “cloud-native” field of software is evolving fast and you’ll need to quickly learn new technologies as required.
Communication: You like discussing a plan upfront, welcome collaboration, and are an excellent verbal and written communicator.
B.E/B.Tech/M.Tech or equivalent experience.
Roles and Responsibilities
- 5 - 8 years of experience in Infrastructure setup on Cloud, Build/Release Engineering, Continuous Integration and Delivery, Configuration/Change Management.
- Good experience with Linux/Unix administration and moderate to significant experience administering relational databases such as PostgreSQL, etc.
- Experience with Docker and related tools (Cassandra, Rancher, Kubernetes etc.)
- Experience working with configuration management tools (Ansible, Chef, Puppet, Terraform, etc.) is a plus.
- Experience with cloud technologies like Azure
- Experience with monitoring and alerting (TICK, ELK, Nagios, PagerDuty)
- Experience with distributed systems and related technologies (NSQ, RabbitMQ, SQS, etc.) is a plus
- Experience with scaling data store technologies (PostgreSQL, Scylla, Redis) is a plus
- Experience with SSH Certificate Authorities and Identity Management (Netflix BLESS) is a plus
- Experience with multi-domain SSL certificates and provisioning (Let's Encrypt) is a plus
- Experience with chaos engineering or similar methodologies is a plus
• Develop and maintain CI/CD tools to build and deploy scalable web and responsive applications in production environment
• Design and implement monitoring solutions that identify both system bottlenecks and production issues
• Design and implement workflows for continuous integration, including provisioning, deployment, testing, and version control of the software.
• Develop self-service solutions for the engineering team in order to deliver sites/software with great speed and quality
o Automating Infra creation
o Provide easy to use solutions to engineering team
• Research, test, and implement new metrics collection systems that can be reused and applied as engineering best practices
o Update our processes and design new processes as needed.
o Establish DevOps Engineer team best practices.
o Stay current with industry trends and source new ways for our business to improve.
• Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
• Manage timely resolution of all critical and/or complex problems
• Maintain, monitor, and establish best practices for containerized environments.
• Mentor new DevOps engineers
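The monitoring responsibilities above center on comparing metric readings against threshold rules. Here is a minimal, illustrative sketch of that evaluation step, similar in spirit to a Prometheus alerting rule (`expr: cpu_usage > 0.9`) but reduced to plain dict lookups; the rule and metric shapes are assumptions for the example.

```python
def evaluate_alerts(metrics, rules):
    """Return the alerts that fire: each rule names a metric, a threshold,
    and an optional severity; a rule fires when the current reading
    exceeds its threshold."""
    fired = []
    for rule in rules:
        value = metrics.get(rule["metric"])
        if value is not None and value > rule["threshold"]:
            fired.append({"metric": rule["metric"], "value": value,
                          "severity": rule.get("severity", "warning")})
    return fired

metrics = {"cpu_usage": 0.95, "mem_usage": 0.50}
rules = [{"metric": "cpu_usage", "threshold": 0.9, "severity": "critical"},
         {"metric": "mem_usage", "threshold": 0.8}]
print(evaluate_alerts(metrics, rules))
```

In production the `fired` list would be routed to a pager or chat integration; keeping evaluation as a pure function makes alert logic easy to test before it wakes anyone up.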
What you will bring
• The desire to work in a fast-paced environment.
• 5+ years’ experience building, maintaining, and deploying production infrastructures in AWS or other cloud providers
• Containerization experience with applications deployed on Docker and Kubernetes
• Understanding of NoSQL and relational databases with respect to deployment and horizontal scalability
• Demonstrated knowledge of distributed and scalable systems
• Experience with maintaining and deploying critical infrastructure components through Infrastructure-as-Code and configuration management tooling across multiple environments (Ansible, Terraform, etc.)
• Strong knowledge of DevOps and CI/CD pipelines (GitHub, Bitbucket, Artifactory, etc.)
• Strong understanding of cloud and infrastructure components (server, storage, network, data, and applications) to deliver end-to-end cloud infrastructure architectures, designs, and recommendations
o AWS services like S3, CloudFront, Kubernetes, RDS, and data warehouses, to come up with architectures/suggestions for new use cases.
• Test system integrity, implemented designs, application deployments, and other infrastructure-related processes, making improvements as needed
Good to have
• Experience with code quality tools and static or dynamic code analysis, plus compliance work: undertaking and resolving issues identified by vulnerability and compliance scans of our infrastructure
• Good knowledge of REST/SOAP/JSON web service API implementation
● Develop and deliver automation software required for building & improving the functionality, reliability, availability, and manageability of applications and cloud platforms
● Champion and drive the adoption of Infrastructure as Code (IaC) practices and mindset
● Design, architect, and build self-service, self-healing, synthetic monitoring and alerting platform and tools
● Automate the development and test automation processes through CI/CD pipeline (Git, Jenkins, SonarQube, Artifactory, Docker containers)
● Build container hosting-platform using Kubernetes
● Introduce new cloud technologies, tools, and processes to keep innovating in the commerce area and drive greater business value.
Skills Required:
● Excellent written and verbal communication skills; a good listener.
● Proficiency in deploying and maintaining Cloud based infrastructure services (AWS, GCP, Azure – good hands-on experience in at least one of them)
● Well versed with service-oriented architecture, cloud-based web services architecture, design patterns and frameworks.
● Good knowledge of cloud services like compute, storage, network, messaging (e.g. SNS, SQS), and automation (e.g. CloudFormation/Terraform).
● Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.
● Experience in systems management/automation tools (Puppet/Chef/Ansible, Terraform)
● Strong Linux system admin experience with excellent troubleshooting and problem-solving skills
● Hands-on experience with languages (Bash/Python/Core Java/Scala)
● Experience with CI/CD pipeline (Jenkins, Git, Maven etc)
● Experience integrating solutions in a multi-region environment
● Self-motivated; learns quickly and delivers results with minimal supervision
● Experience with Agile/Scrum/DevOps software development methodologies.
Nice to Have:
● Experience in setting up an Elasticsearch-Logstash-Kibana (ELK) stack.
● Experience working with large-scale data.
● Experience with Monitoring tools such as Splunk, Nagios, Grafana, DataDog etc.
● Prior experience working with distributed architectures like Hadoop, MapReduce, etc.
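The ELK experience above is, at its core, about turning raw log lines into structured records. As an illustrative sketch, here is the kind of parsing that Logstash's grok filter performs, reduced to a single Python regex over a simplified NGINX/Apache access-log line (the pattern covers only this simplified format, not every real log variant):

```python
import re

# A grok-style pattern for a simplified access-log line; in a real ELK
# setup, Logstash's grok filter would do this parsing before indexing.
LOG_PATTERN = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<ts>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) \S+" (?P<status>\d{3}) (?P<bytes>\d+)'
)

def parse_access_log(line):
    """Return a dict of named fields, or None if the line does not match."""
    m = LOG_PATTERN.match(line)
    return m.groupdict() if m else None

line = '127.0.0.1 - - [10/Oct/2023:13:55:36 +0000] "GET /health HTTP/1.1" 200 15'
print(parse_access_log(line))
```

Once lines are structured like this, fields such as `status` and `path` can be indexed in Elasticsearch and aggregated in Kibana dashboards.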