DevSecOps & Site Reliability Architect - AWS/Azure
- You will manage all elements of the post-sale program relationship with your customers, starting with customer on-boarding and continuing throughout the customer relationship.
- As the primary customer interface, you engage with customer teams to educate, identify needs, develop designs, set goals, manage and execute on plans that unlock continuous, incremental value from their investments in the CloudPassage Halo platform.
- You are hands-on during execution and thoroughly enjoy seeing your security projects come to life and supporting them afterwards. You are a trusted adviser.
Responsibilities :
- Manage a portfolio of 5+ Enterprise customer accounts with complex needs (typical enterprise customers invest between $500k and $4m+ per year with CloudPassage, have hundreds to tens of thousands of individual public cloud infrastructure deployments, and protect hundreds of thousands of cloud infrastructure assets with Halo).
- Provide level-3 technical support on your customers' most complex issues
- Lead implementation of low-level security controls in cloud environments for services, servers, and containers
- Remotely diagnose and resolve DevSecOps issues in customer environments, including DevOps issues that may be interfering with CloudPassage processing.
- Interact with CloudPassage Engineering team by providing customer issue reproduction and data capture, technical diagnostics and validating fixes. QA experience preferred.
- Establish and program manage proactive, value-driven, high-touch relationships with your customers to understand, document and align customer strategies, business objectives, designs, processes and projects with Halo platform capabilities and broader CloudPassage services.
- Develop a trusted advisor relationship by building and maintaining appropriate relationships at all levels with your customer accounts, creating a premium and high-caliber experience.
- Ensure continued satisfaction, identify & confirm unaddressed customer needs that can be value-add opportunities for up-sell and cross-sell, and communicate those needs to the CloudPassage sales team. Identify any early CSAT issues and renewal risks and work with the internal team to remediate and ensure strong CSAT and a successful renewal.
- Be a strong customer advocate within CloudPassage and identify and support areas for improvement in the customer experience, both in our product and processes.
- Be team-oriented, but with a bias towards action to get things done for your customers.
Requirements : Strong cloud security knowledge & experience including :
- End-to-end enterprise security processes
- Cloud security - cloud migrations & shift in security requirements, tooling & approach
- Hands-on DevOps, DevSecOps architecture & automation (critical)
- 4+ years experience in security consulting and project/program management serving cybersecurity customers.
- Complex, level 3 technical support
- Remotely diagnosing & resolving DevSecOps issues in customer environments
- Interacting with CloudPassage Engineering team with customer issue reproduction
- Experience working in a security SaaS company in a startup environment.
- Experience working with Executive and C-Level teams.
- Ability to build and maintain strong relationships with internal and external constituents.
- Excellent organization, project management, time management, and communication skills.
- Understand and document customer requirements, map to product, track & report metrics, identify up-sell and cross-sell opportunities.
- Analytical both quantitatively and qualitatively.
- Excellent verbal and written communication skills.
- Security certifications (Security+, CISSP, etc.).
Expert Technical Skills :
- Consulting and project management : documenting project charters, project plans, executing delivery management, status reporting. Executive-level presentation skills.
- Security best practices expertise : software vulnerabilities, configuration management, intrusion detection, file integrity.
- System administration (including Linux and Windows) of cloud environments : AWS, Azure, GCP; strong networking/proxy skills.
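File integrity monitoring, one of the best-practice areas listed above, reduces to hashing files and comparing them against a known-good baseline. A minimal sketch (the baseline format and paths are illustrative, not CloudPassage's implementation):

```python
import hashlib
from pathlib import Path

def sha256_of(path: str) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def detect_changes(baseline: dict[str, str]) -> list[str]:
    """Compare current file hashes against a {path: digest} baseline.

    Returns the paths that are missing or whose contents changed.
    """
    changed = []
    for path, expected in baseline.items():
        p = Path(path)
        if not p.is_file() or sha256_of(path) != expected:
            changed.append(path)
    return changed
```

A production FIM tool adds scheduling, permission/ownership checks, and tamper-resistant baseline storage, but the core comparison is this.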
Proficient Technical Skills :
- Configuration/Orchestration (Chef, Puppet, Ansible, SaltStack, CloudFormation, Terraform).
- CI/CD processes and environments.
Familiar Technical Skills & Knowledge : Python scripting & REST APIs, Docker containers, Zendesk & JIRA.
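As a flavor of the Python/REST scripting mentioned above, here is a sketch of cursor-style pagination against a generic REST API. The `{"items": ..., "next_cursor": ...}` response shape and the injected `fetch_page` callable are hypothetical, not the actual Halo API:

```python
from typing import Callable, Iterator, Optional

def paginate(fetch_page: Callable[[Optional[str]], dict]) -> Iterator[dict]:
    """Yield every item from a cursor-paginated REST endpoint.

    `fetch_page(cursor)` is expected to return a JSON-like dict of the
    (assumed) shape {"items": [...], "next_cursor": "..." or None}.
    """
    cursor = None
    while True:
        page = fetch_page(cursor)
        yield from page.get("items", [])
        cursor = page.get("next_cursor")
        if not cursor:
            break
```

Injecting the fetch function keeps the pagination logic testable without network access; in practice `fetch_page` would wrap an authenticated HTTP GET.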
About Intuitive Technology Partners
Candidate must have a minimum of 8 years of IT experience
IST time zone.
Candidates should have hands-on experience in
DevOps (GitLab, Artifactory, SonarQube, AquaSec, Terraform & Docker/K8s)
Thanks & Regards,
Anitha. K
TAG Specialist
Exp: 8 to 10 years; notice period: 0 to 20 days
Job Description :
- Provision GCP resources based on the architecture design and features aligned with business objectives
- Monitor resource availability and usage metrics, and provide guidelines for cost and performance optimization
- Assist IT/business users in resolving GCP service-related issues
- Provide guidelines for cluster automation and migration approaches and techniques, including ingest, store, process, analyze, and explore/visualize data.
- Provision GCP resources for data engineering and data science projects.
- Assist with automated data ingestion, data migration, and transformation (good to have)
- Assist with deployment and troubleshooting of applications in Kubernetes.
- Establish connections and credibility in addressing business needs via designing and operating cloud-based data solutions
Key Responsibilities / Tasks :
- Building complex CI/CD pipelines for cloud native PaaS services such as Databases, Messaging, Storage, Compute in Google Cloud Platform
- Building deployment pipeline with Github CI (Actions)
- Building Terraform code to deploy infrastructure as code
- Working with deployment and troubleshooting of Docker, GKE, OpenShift, and Cloud Run
- Working with Cloud Build, Cloud Composer, and Dataflow
- Configuring software to be monitored by AppDynamics
- Configuring Stackdriver logging and monitoring in GCP
- Working with Splunk, Kibana, Prometheus, and Grafana to set up dashboards
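The pipeline work described above (Cloud Build, GitHub CI, Terraform deploys) usually reduces to running ordered steps and failing fast. A language-agnostic sketch of that fail-fast step runner; the commented-out Terraform commands are placeholders, not a real pipeline definition:

```python
import subprocess

def run_pipeline(steps: list[str]) -> bool:
    """Run shell steps in order; stop at the first failure."""
    for step in steps:
        print(f"--> {step}")
        result = subprocess.run(step, shell=True)
        if result.returncode != 0:
            print(f"step failed ({result.returncode}): {step}")
            return False
    return True

# Illustrative only; a real deployment pipeline would run something like:
# run_pipeline(["terraform init", "terraform plan", "terraform apply -auto-approve"])
```

Hosted CI systems (Cloud Build, GitHub Actions) implement this same contract declaratively: each step runs in sequence, and a non-zero exit aborts the job.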
Your skills, experience, and qualification :
- Total experience of 5+ years in DevOps, with at least 4 years of experience in Google Cloud and GitHub CI.
- Should have strong experience in microservices/APIs.
- Should have strong experience in DevOps tools like GitHub CI, TeamCity, Jenkins, and Helm.
- Should know Application deployment and testing strategies in Google cloud platform.
- Defining and setting development, test, release, update, and support processes for DevOps operation
- Strive for continuous improvement and build continuous integration, continuous delivery, and continuous deployment pipelines (CI/CD)
- Excellent understanding of Java
- Knowledge of Kafka, ZooKeeper, Hazelcast, and Pub/Sub is nice to have.
- Understanding of cloud networking, security such as software defined networking/firewalls, virtual networks and load balancers.
- Understanding of cloud identity and access
- Understanding of the compute runtime and the differences between native compute, virtual and containers
- Configuration and managing databases such as Oracle, Cloud SQL, and Cloud Spanner.
- Excellent troubleshooting
- Working knowledge of various tools, open-source technologies
- Awareness of critical concepts of Agile principles
- Google Professional Cloud DevOps Engineer certification is desirable.
- Experience with Agile/SCRUM environment.
- Familiar with Agile Team management tools (JIRA, Confluence)
- Understand and promote Agile values: FROCC (Focus, Respect, Openness, Commitment, Courage)
- Good communication skills
- Pro-active team player
- Comfortable working in multi-disciplinary, self-organized teams
- Professional knowledge of English
- Differentiators : knowledge/experience about
Bachelor's degree in information security, computer science, or a related field.
Strong DevOps experience of at least 4 years
Strong Experience in Unix/Linux/Python scripting
Strong networking knowledge; vSphere networking stack knowledge desired.
Experience on Docker and Kubernetes
Experience with cloud technologies (AWS/Azure)
Exposure to Continuous Development Tools such as Jenkins or Spinnaker
Exposure to configuration management systems such as Ansible
Knowledge of resource monitoring systems
Ability to scope and estimate
Strong verbal and communication skills
Advanced knowledge of Docker and Kubernetes.
Exposure to Blockchain as a Service (BaaS) platforms such as Chainstack, IBM Blockchain Platform, Oracle Blockchain Cloud, Rubix, VMware, etc.
Capable of provisioning and maintaining local enterprise blockchain platforms for development and QA (Hyperledger Fabric/BaaS/Corda/ETH).
About Navis
Hiring for a funded fintech startup based out of Bangalore!
Our Ideal Candidate
We are looking for a Senior DevOps engineer to join the engineering team and help us automate the build, release, packaging and infrastructure provisioning and support processes. The candidate is expected to own the full life-cycle of provisioning, configuration management, monitoring, maintenance and support for cloud as well as on-premise deployments.
Requirements
- 5-plus years of DevOps experience managing the Big Data application stack, including HDFS, YARN, Spark, Hive, and HBase
- Deeper understanding of all the configurations required for installing and maintaining the infrastructure in the long run
- Experience setting up high availability, configuring resource allocation, setting up capacity schedulers, handling data recovery tasks
- Experience with middle-layer technologies including web servers (httpd, nginx), application servers (JBoss, Tomcat), and database systems (PostgreSQL, MySQL)
- Experience setting up enterprise security solutions including setting up active directories, firewalls, SSL certificates, Kerberos KDC servers, etc.
- Experience maintaining and hardening the infrastructure by regularly applying required security packages and patches
- Experience supporting on-premise solutions as well as on AWS cloud
- Experience working with and supporting Spark-based applications on YARN
- Experience with one or more automation tools such as Ansible, Terraform, etc.
- Experience working with CI/CD tools like Jenkins and various test report and coverage plugins
- Experience defining and automating the build, versioning and release processes for complex enterprise products
- Experience supporting clients remotely and on-site
- Experience working with and supporting Java- and Python-based tech stacks would be a plus
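Automating the versioning part of a build and release process, as the requirements above call for, often starts with a semantic-version bump helper along these lines (a minimal sketch assuming plain MAJOR.MINOR.PATCH strings, no pre-release tags):

```python
def bump(version: str, part: str = "patch") -> str:
    """Bump one component of a MAJOR.MINOR.PATCH version string.

    Bumping a component resets the less-significant ones to zero,
    per semantic-versioning convention.
    """
    major, minor, patch = (int(x) for x in version.split("."))
    if part == "major":
        return f"{major + 1}.0.0"
    if part == "minor":
        return f"{major}.{minor + 1}.0"
    if part == "patch":
        return f"{major}.{minor}.{patch + 1}"
    raise ValueError(f"unknown part: {part}")
```

A release job would call this, write the new version into the build metadata, and tag the commit, so every artifact is traceable to a source revision.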
Desired Non-technical Requirements
- Very strong communication skills both written and verbal
- Strong desire to work with start-ups
- Must be a team player
Job Perks
- Attractive variable compensation package
- Flexible working hours – everything is results-oriented
- Opportunity to work with an award-winning organization in the hottest space in tech – artificial intelligence and advanced machine learning
Position: DevOps Lead
Job Description
● Research, evangelize and implement best practices and tools for GitOps, DevOps, continuous integration, build automation, deployment automation, configuration management, infrastructure as code.
● Develop software solutions to support DevOps tooling; including investigation of bug fixes, feature enhancements, and software/tools updates
● Participate in the full systems life cycle with solution design, development, implementation, and product support using Scrum and/or other Agile practices
● Evaluating, implementing, and streamlining DevOps practices.
● Design and drive the implementation of fully automated CI/CD pipelines.
● Designing and creating Cloud services and architecture for highly available and scalable environments. Lead the monitoring, debugging, and enhancing pipelines for optimal operation and performance. Supervising, examining, and handling technical operations.
Qualifications
● 5 years of experience in managing application development, software delivery lifecycle, and/or infrastructure development and/or administration
● Experience with source code repository management tools, code merge and quality checks, continuous integration, and automated deployment & management using tools like Bitbucket, Git, Ansible, Terraform, Artifactory, ServiceNow, SonarQube, and Selenium.
● Minimum of 4 years of experience with approaches and tooling for automated build, delivery, and release of the software
● Experience and/or knowledge of CI/CD tools: Jenkins, Bitbucket Pipelines, Gitlab CI, GoCD.
● Experience with Linux systems (CentOS, RHEL, Ubuntu, SELinux) and Linux administration.
● Minimum of 4 years of experience managing medium/large teams, including progress monitoring and reporting
● Experience and/or knowledge of Docker, Cloud, and Orchestration: GCP, AWS, Kubernetes.
● Experience and/or knowledge of system monitoring, logging, high availability, redundancy, autoscaling, and failover.
● Experience automating manual and/or repetitive processes.
● Experience and/or knowledge with networking and load balancing: Nginx, Firewall, IP network
About the Company
Blue Sky Analytics is a Climate Tech startup that combines the power of AI & Satellite data to aid in the creation of a global environmental data stack. Our funders include Beenext and Rainmatter. Over the next 12 months, we aim to expand to 10 environmental data-sets spanning water, land, heat, and more!
We are looking for DevOps Engineer who can help us build the infrastructure required to handle huge datasets on a scale. Primarily, you will work with AWS services like EC2, Lambda, ECS, Containers, etc. As part of our core development crew, you’ll be figuring out how to deploy applications ensuring high availability and fault tolerance along with a monitoring solution that has alerts for multiple microservices and pipelines. Come save the planet with us!
Your Role
- Build applications at scale that can be scaled up and down on command.
- Manage a cluster of microservices talking to each other.
- Build pipelines for huge data ingestion, processing, and dissemination.
- Optimize services for low cost and high efficiency.
- Maintain a highly available and scalable PostgreSQL database cluster.
- Maintain an alerting and monitoring system using Prometheus, Grafana, and Elasticsearch.
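The alerting rules such Prometheus/Grafana setups encode are essentially ratios over counter deltas in a time window. A sketch that makes the arithmetic explicit (the 5% threshold is a made-up example, not a recommended SLO):

```python
def error_rate(errors_delta: float, requests_delta: float) -> float:
    """Windowed error rate from counter increases over that window."""
    if requests_delta <= 0:
        return 0.0
    return errors_delta / requests_delta

def should_alert(errors_delta: float, requests_delta: float,
                 threshold: float = 0.05) -> bool:
    """Fire when the windowed error rate exceeds the threshold."""
    return error_rate(errors_delta, requests_delta) > threshold
```

In PromQL this is the familiar `rate(errors_total[5m]) / rate(requests_total[5m]) > 0.05` pattern; Grafana then plots the same ratio per microservice.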
Requirements
- 1-4 years of work experience.
- Strong emphasis on Infrastructure as Code - Cloudformation, Terraform, Ansible.
- CI/CD concepts and implementation using CodePipeline and GitHub Actions.
- Advanced hold on AWS services like IAM, EC2, ECS, Lambda, S3, etc.
- Advanced Containerization - Docker, Kubernetes, ECS.
- Experience with managed services like database cluster, distributed services on EC2.
- Self-starters and curious folks who don't need to be micromanaged.
- Passionate about Blue Sky Climate Action and working with data at scale.
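For the Infrastructure-as-Code requirement above, the artifact being managed is a declarative template that can be generated and validated programmatically. A minimal CloudFormation sketch for a single S3 bucket (the logical ID and bucket name below are placeholders):

```python
import json

def s3_bucket_template(logical_id: str, bucket_name: str) -> str:
    """Render a minimal CloudFormation template declaring one S3 bucket."""
    template = {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            logical_id: {
                "Type": "AWS::S3::Bucket",
                "Properties": {"BucketName": bucket_name},
            }
        },
    }
    return json.dumps(template, indent=2)

# Illustrative usage with placeholder names:
# print(s3_bucket_template("DataBucket", "example-data-bucket"))
```

Terraform expresses the same idea in HCL; either way, the win is that infrastructure changes become reviewable, versioned diffs rather than console clicks.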
Benefits
- Work from anywhere: Work by the beach or from the mountains.
- Open source at heart: We are building a community that you can use, contribute to, and collaborate on.
- Own a slice of the pie: Possibility of becoming an owner by investing in ESOPs.
- Flexible timings: Fit your work around your lifestyle.
- Comprehensive health cover: Health cover for you and your dependents to keep you tension free.
- Work Machine of choice: Buy a device and own it after completing a year at BSA.
- Quarterly retreats: Yes, there's work, but then there's also the non-work, fun aspect, aka the retreat!
- Yearly vacations: Take time off to rest and get ready for the next big assignment by availing the paid leaves.
• Design cloud infrastructure that is secure, scalable, and highly available on AWS
• Define infrastructure and deployment requirements
• Provision, configure and maintain AWS cloud infrastructure defined as code
• Ensure configuration and compliance with configuration management tools
• Troubleshoot problems across a wide array of services and functional areas
• Build and maintain operational tools for deployment, monitoring, and analysis of AWS infrastructure and systems
• Perform infrastructure cost analysis and optimization
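The cost analysis in the last bullet above is, at its simplest, usage multiplied by rates across the fleet. A toy sketch; the instance type and hourly price in the test are invented, not real AWS pricing:

```python
HOURS_PER_MONTH = 730  # common billing approximation (24 * 365 / 12)

def monthly_cost(fleet: dict[str, tuple[int, float]]) -> float:
    """Estimate monthly on-demand cost.

    `fleet` maps instance_type -> (count, hourly_rate_usd).
    """
    return sum(count * rate * HOURS_PER_MONTH
               for count, rate in fleet.values())
```

Real cost work layers in reserved/spot pricing, storage, and data transfer, usually via Cost Explorer or the CUR, but the per-resource arithmetic looks like this.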
Qualifications:
• At least 3-5 years of experience building and maintaining AWS infrastructure (VPC, EC2, Security Groups, IAM, ECS, CodeDeploy, CloudFront, S3)
• Strong understanding of how to secure AWS environments and meet compliance requirements
• Expertise on configuration management
• Hands-on experience deploying and managing infrastructure with Terraform
• Solid foundation of networking and Linux administration
• Experience with Docker, GitHub, Jenkins, ELK and deploying applications on AWS
• Ability to learn/use a wide variety of open source technologies and tools
• Strong bias for action and ownership
- Have 3+ years of experience in Python development
- Be familiar with common database access patterns
- Have experience with designing systems and monitoring metrics, looking at graphs.
- Have knowledge of AWS, Kubernetes and Docker.
- Be able to work well in a remote development environment.
- Be able to communicate in English at a native speaking and writing level.
- Be responsible to your fellow remote team members.
- Be highly communicative and go out of your way to contribute to the team and help others
- 5+ years hands-on experience with designing, deploying and managing core AWS services and infrastructure
- Proficiency in scripting using Bash, Python, Ruby, Groovy, or similar languages
- Experience in source control management, specifically with Git
- Hands-on experience in Unix/Linux and bash scripting
- Experience building and managing Helm-based build and release CI/CD pipelines for Kubernetes platforms (EKS, OpenShift, GKE)
- Strong experience with orchestration and config management tools such as Terraform, Ansible or Cloudformation
- Ability to debug and analyze issues leveraging tools like AppDynamics, New Relic, and Sumo Logic
- Knowledge of Agile Methodologies and principles
- Good writing and documentation skills
- Strong collaborator with the ability to work well with core teammates and our colleagues across STS
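For the Helm/Kubernetes pipeline experience above, the unit being templated and released is typically a Deployment manifest. A minimal sketch rendered as a Python dict (the app name and image are placeholders; Helm would produce the same structure from YAML templates):

```python
def deployment_manifest(name: str, image: str, replicas: int = 2) -> dict:
    """Build a minimal Kubernetes apps/v1 Deployment manifest."""
    labels = {"app": name}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": labels},
        "spec": {
            "replicas": replicas,
            # The selector must match the pod template's labels.
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {"containers": [{"name": name, "image": image}]},
            },
        },
    }
```

Serialized to YAML, this is what `kubectl apply -f` or a Helm release ultimately submits; CI/CD pipelines mostly vary the image tag and replica count per environment.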
PRAXINFO Hiring DevOps Engineer.
Position : DevOps Engineer
Job Location : C.G.Road, Ahmedabad
EXP : 1-3 Years
Salary : 40K - 50K
Required skills:
⦿ Good understanding of cloud infrastructure (AWS, GCP, etc.)
⦿ Hands-on with Docker, Kubernetes, or ECS
⦿ Ideally a strong Linux background (RHCSA, RHCE)
⦿ Good understanding of monitoring systems (Nagios, etc.) and logging solutions (Elasticsearch, etc.)
⦿ Microservice architectures
⦿ Experience with distributed systems and highly scalable systems
⦿ Demonstrated history of automating operations processes via services and tools (Puppet, Ansible, etc.)
⦿ Systematic problem-solving approach coupled with a strong sense of ownership and drive.
If anyone is interested, share your resume at hiring at praxinfo dot com!
#linux #devops #engineer #kubernetes #docker #containerization #python #shellscripting #git #jenkins #maven #ant #aws #RHCE #puppet #ansible