

Similar jobs

GCP Cloud Engineer:
- Proficiency in infrastructure as code (Terraform).
- Scripting and automation skills (e.g., Python, Shell); Python proficiency is a must (see the sketch after this list).
- Collaborate with teams across the company (e.g., network, security, operations) to build complete cloud offerings.
- Design Disaster Recovery and backup strategies to meet application objectives.
- Working knowledge of Google Cloud
- Working knowledge of various tools, open-source technologies, and cloud services
- Experience working on Linux-based infrastructure.
- Excellent problem-solving and troubleshooting skills
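For illustration only, here is a minimal Python automation sketch in the spirit of the scripting requirement above. It shells out to the gcloud CLI (assumed to be installed and already authenticated) to inventory Compute Engine instances; the project ID is a placeholder, not part of any posting.

```python
# Hypothetical automation sketch: inventory Compute Engine VMs via the gcloud CLI.
import json
import subprocess

def list_instances(project: str) -> list[dict]:
    """Return all Compute Engine instances in the project as parsed JSON."""
    result = subprocess.run(
        ["gcloud", "compute", "instances", "list",
         "--project", project, "--format=json"],
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout)

if __name__ == "__main__":
    for vm in list_instances("my-sample-project"):  # placeholder project ID
        print(vm["name"], vm["status"])
```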
Key Qualifications:
- At least 2 years of hands-on experience with cloud infrastructure on AWS or GCP
- Exposure to configuration management and orchestration tools at scale (e.g. Terraform, Ansible, Packer)
- Knowledge in DevOps tools (e.g. Jenkins, Groovy, and Gradle)
- Familiarity with monitoring and alerting tools (e.g., CloudWatch, the ELK stack, Prometheus)
- Proven ability to work independently or as an integral member of a team
Preferred Skills:
- Familiarity with standard IT security practices such as encryption, credentials and key management
- Proven ability to pick up new programming languages (e.g., Java, Python) to support DevOps operations and cloud transformation
- Familiarity with web standards (e.g., REST APIs, web security mechanisms)
- Multi-cloud management experience with GCP / Azure
- Experience in performance tuning, service outage management, and troubleshooting
Responsibilities:
- Install, configure, and maintain Kubernetes clusters.
- Develop Kubernetes-based solutions.
- Improve Kubernetes infrastructure.
- Work with other engineers to troubleshoot Kubernetes issues.
Kubernetes Engineer Requirements & Skills
- Kubernetes administration experience, including installation, configuration, and troubleshooting (a troubleshooting sketch follows this list)
- Kubernetes development experience
- Linux/Unix experience
- Strong analytical and problem-solving skills
- Excellent communication and interpersonal skills
- Ability to work independently and as part of a team
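As a hedged illustration of the troubleshooting skills listed above, the sketch below uses the official Kubernetes Python client to flag pods that are not healthy. It assumes the client library is installed and a valid kubeconfig is available; it is not tied to any specific employer's stack.

```python
# Illustrative sketch: list pods that are not Running or Succeeded.
# Requires `pip install kubernetes` and a working kubeconfig (assumption).
from kubernetes import client, config

def report_unhealthy_pods() -> None:
    """Print pods whose phase suggests they need attention."""
    config.load_kube_config()  # or config.load_incluster_config() inside a cluster
    v1 = client.CoreV1Api()
    for pod in v1.list_pod_for_all_namespaces(watch=False).items:
        if pod.status.phase not in ("Running", "Succeeded"):
            print(f"{pod.metadata.namespace}/{pod.metadata.name}: {pod.status.phase}")

if __name__ == "__main__":
    report_unhealthy_pods()
```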

● Good understanding of how the web works
● Experience with at least one language such as Java or Python
● Good with Shell scripting
● Experience with *nix-based operating systems
● Experience with k8s, containers
● Fairly good understanding of AWS/GCP/Azure
● Troubleshoot and fix outages and performance issues in the infrastructure stack
● Identify gaps and design automation tools for all feasible infrastructure functions
● Good verbal and written communication skills
● Drive the team's SLAs/SLOs (see the error-budget sketch after this list)
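To make the SLA/SLO item concrete, here is a small, purely illustrative calculation of the error budget implied by an availability SLO; the targets shown are examples, not commitments from any of these postings.

```python
# Illustrative only: monthly error budget implied by an availability SLO.
# A 99.9% SLO over a 30-day month allows roughly 43.2 minutes of downtime.

def error_budget_minutes(slo: float, days: int = 30) -> float:
    """Minutes of allowed downtime for a given availability SLO."""
    total_minutes = days * 24 * 60
    return total_minutes * (1 - slo)

print(error_budget_minutes(0.999))   # 43.2
print(error_budget_minutes(0.9995))  # 21.6
```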
Benefits
This is an opportunity to work on a fairly complex set of systems and improve
them. You will get a chance to learn things like “how to think about code
simplicity”, “how to write for maintainability” and several other things.
● Comprehensive health insurance policy.
● Flexible working hours and a very friendly work environment.
● Flexibility to work either in the office (post Covid) or remotely.
Acceldata is creating the data observability space. We make it possible for data-driven enterprises to effectively monitor, discover, and validate data platforms at petabyte scale. Our customers are Fortune 500 companies, including Asia's largest telecom company, a unicorn fintech startup in India, and many more. We are lean, hungry, customer-obsessed, and growing fast. Our Solutions team values productivity, integrity, and pragmatism. We provide a flexible, remote-friendly work environment.
We are building software that provides insight into companies' data operations and allows them to focus on delivering data reliably, with speed and effectiveness. Join us in building an industry-leading data observability platform that ensures data reliability across every layer (compute, data, and pipeline) of a cloud or on-premise data platform.
Position Summary:
This role supports customer implementations of a data quality and reliability product. The candidate is expected to install the product in the client environment, run proofs of concept with prospects, become a product expert, and troubleshoot post-installation and production issues. The role involves significant interaction with the client's data engineering team, so good communication skills are expected.
Required experience
- 6-7 years of experience providing engineering support to data domains/pipelines/data engineers
- Experience troubleshooting data issues, analyzing end-to-end data pipelines, and working with users to resolve issues
- Experience setting up enterprise security solutions, including Active Directory, firewalls, SSL certificates, Kerberos KDC servers, etc.
- Basic understanding of SQL
- Experience working with technologies such as S3; Kubernetes experience preferred (see the S3 sketch after this list)
- Databricks/Hadoop/Kafka experience preferred but not required
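Purely as an illustration of the S3-related pipeline support work described above, the following sketch checks whether a pipeline has landed files in an S3 prefix today. It assumes boto3 with valid AWS credentials; the bucket and prefix names are placeholders.

```python
# Illustrative sketch: verify that a data pipeline landed files in S3 today.
import boto3
from datetime import datetime, timezone

def objects_landed_today(bucket: str, prefix: str) -> int:
    """Count objects under the prefix whose last-modified date is today (UTC)."""
    s3 = boto3.client("s3")
    today = datetime.now(timezone.utc).date()
    resp = s3.list_objects_v2(Bucket=bucket, Prefix=prefix)
    return sum(1 for obj in resp.get("Contents", [])
               if obj["LastModified"].date() == today)

print(objects_landed_today("example-data-lake", "raw/events/"))  # placeholder names
```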
We are looking for an experienced DevOps (Development and Operations) professional to join our growing organization. In this position, you will be responsible for finding and reporting bugs in web and mobile apps and for assisting senior DevOps engineers in managing infrastructure projects and processes. Keen attention to detail, problem-solving abilities, and a solid knowledge base are essential.
As a DevOps engineer, you will work in a Kubernetes-based microservices environment.
Experience with Microsoft Azure and Kubernetes is preferred but not mandatory.
Ultimately, you will ensure that our products, applications and systems work correctly.
Responsibilities:
- Detect and track software defects and inconsistencies
- Apply quality engineering principles throughout the Agile product lifecycle
- Handle code deployments in all environments
- Monitor metrics and develop ways to improve
- Consult with peers for feedback during testing stages
- Build, maintain, and monitor configuration standards
- Maintain day-to-day management and administration of projects
- Manage CI and CD tools with team
- Follow all best practices and procedures as established by the company
- Provide support and documentation
Required Technical and Professional Expertise
- Minimum of 2 years in DevOps
- Have experience in SaaS infrastructure development and Web Apps
- Experience in delivering microservices at scale; designing microservices solutions
- Proven Cloud experience/delivery of applications on Azure
- Proficient in configuration management tools such as Ansible, Terraform, Puppet, Chef, or Salt
- Hands-on experience in Networking/network configuration, Application performance monitoring, Container performance, and security.
- Understanding of Kubernetes and Python, along with scripting languages such as Bash/shell
- Good to have: experience with Linux internals, Linux packaging, release engineering (branching, versioning, tagging), artifact repositories (Artifactory, Nexus), and CI/CD tooling (Concourse CI, Travis, Jenkins)
- Must be a proactive person
- You love collaborative environments that use agile methodologies to encourage creative design thinking and find innovative ways to develop with cutting-edge technologies
- An ambitious individual who can work under their own direction towards agreed targets/goals, with a creative approach to work
- An intuitive individual with the ability to manage change and proven time management skills
- Proven interpersonal skills while contributing to team effort by accomplishing related results as needed.
- 7-10 years of experience with secure SDLC/DevSecOps practices, such as automating security processes within the CI/CD pipeline
- At least 4 years of experience designing and securing data lake and web applications deployed to AWS and Azure; scripting/automation skills in Python, Shell, YAML, and JSON
- At least 4 years of hands-on experience with software development lifecycle, Agile project management (e.g. Jira, Confluence), source code management (e.g. Git), build automation (e.g. Jenkins), code linting and code quality (e.g. SonarQube), test automation (e.g. Selenium)
- Hands-on, solid understanding of Amazon Web Services and Azure-based infrastructure and applications
- Experience writing CloudFormation templates and working with Jenkins, Kubernetes, Docker, and microservice application architecture and deployment (see the validation sketch after this list)
- Strong know-how of VA/PT (vulnerability assessment/penetration testing) integration in the CI/CD pipeline
- Experience in handling financial solutions & customer-facing applications
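As an illustrative sketch of the CloudFormation automation mentioned above, the snippet below asks CloudFormation to validate a local template before it enters a CI/CD pipeline. It assumes boto3 with AWS credentials configured; the template path and region are placeholders.

```python
# Hypothetical pre-deployment check: validate a CloudFormation template with boto3.
import boto3

def validate_template(path: str, region: str = "us-east-1") -> dict:
    """Ask CloudFormation to validate a local template file."""
    cfn = boto3.client("cloudformation", region_name=region)
    with open(path) as f:
        return cfn.validate_template(TemplateBody=f.read())

if __name__ == "__main__":
    result = validate_template("templates/app-stack.yaml")  # placeholder path
    print([p["ParameterKey"] for p in result.get("Parameters", [])])
```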
Roles
- Accelerate enterprise cloud adoption while enabling rapid and stable delivery of capabilities using continuous integration and continuous deployment principles, methodologies, and technologies
- Manage & deliver diverse cloud [AWS, Azure, GCP] DevSecOps journeys
- Identify, prototype, engineer, and deploy emerging software engineering methodologies and tools
- Maximize automation and enhance DevSecOps pipelines and other tasks
- Define and promote enterprise software engineering and DevSecOps standards, practices, and behaviors
- Operate and support a suite of enterprise DevSecOps services
- Implement security automation to shorten the loop between development and deployment.
- Support project teams to adopt & integrate the DevSecOps environment
- Managing application vulnerabilities, Data security, encryption, tokenization, access management, Secure SDLC, SAST/DAST
- Coordinate with development and operations teams for practical automation solutions and custom flows.
- Own DevSecOps initiatives by providing objective, practical and relevant ideas, insights, and advice.
- Act as release gatekeeper, with an understanding of the OWASP Top 10 vulnerabilities, NIST SP-800-xx, NVD, CVSS scoring, and related concepts
- Build workflows to ensure a successful DevSecOps journey for various enterprise applications.
- Understand the strategic direction to reach business goals across multiple projects & teams
- Collaborate with development teams to understand project deliverables and promote DevSecOps culture
- Formulate & deploy cloud automation strategies and tools
Skills
- Knowledge of the DevSecOps culture and principles.
- An understanding of cloud technologies & components
- A flair for programming languages such as Shell, Python, and JavaScript
- Strong teamwork and communication skills.
- Knowledge of threat modeling and risk assessment techniques.
- Up-to-date knowledge of cybersecurity threats, current best practices, and the latest software.
- An understanding of tools such as Puppet, Chef, ThreatModeler, Checkmarx, Immunio, and Aqua.
- Strong know-how of Kubernetes, Docker, AWS, Azure-based deployments
- On-the-job learning of new programming languages, automation tools, and deployment architectures
- Administration and Support for Azure DevOps Server/Services
- Migration from Azure DevOps Server to Azure DevOps Services (SaaS)
- Process Template Customization and Deployment model
- Migration, upgrade, monitoring, and maintenance of the ADS (Azure DevOps Server) instance
- Automation using the REST API to build extensions and custom reporting (see the sketch after this list)
- Expert in all modules of Azure DevOps Server/Services (work item, SCM/version control, build, release, test, and reporting management)
- CI/CD orchestration tools and other SCM/version control tools
- Microsoft MCSD Application Lifecycle Management certified
- A bachelor's or master's degree with a minimum of 6 years of relevant work experience in Azure DevOps Server/Services (SaaS)
- Good communication skills
- Strong knowledge of application lifecycle workflows and processes involved in the design, development, deployment, test, and maintenance of software systems in the Windows environment
- Visual Studio and .NET Framework experience is required
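The custom-reporting item above can be illustrated with a short, assumption-laden sketch against the Azure DevOps REST API: it fetches a single work item using a personal access token. The organization, project, PAT, and work item ID are all placeholders.

```python
# Illustrative sketch: read one work item from the Azure DevOps REST API.
import requests

ORG = "my-org"            # placeholder organization
PROJECT = "my-project"    # placeholder project
PAT = "your-personal-access-token"  # placeholder PAT

def get_work_item(item_id: int) -> dict:
    url = (f"https://dev.azure.com/{ORG}/{PROJECT}/_apis/wit/workitems/"
           f"{item_id}?api-version=7.0")
    resp = requests.get(url, auth=("", PAT))  # PAT is sent as the basic-auth password
    resp.raise_for_status()
    return resp.json()

print(get_work_item(42)["fields"]["System.Title"])  # placeholder work item ID
```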
- Administration and Support for Azure DevOps Server/Services
- Migration from Azure DevOps Server to Azure DevOps Services (SaaS)
- Process Template Customization and Deployment model
- Work with the user community to adopt new features, enable new use cases, and help resolve any issues
- Create customizations and tools to help support the team’s needs (PM, Dev, Test, & Ops)
- Take the lead in the validation of the application.
- Monitor the health of the solution and take proactive steps to ensure reliable availability and performance
- Manage patches and updates for tooling solutions and related hosting environments including the operating system
- Automate the maintenance process
- Work independently without supervision
- Work on continuous improvement of the products through innovation and learning. Someone with a knack for benchmarking and optimization
- Experience in deploying highly complex, distributed transaction processing systems.
- Stay abreast with new innovations and the latest technology trends and explore ways of leveraging these for improving the product in alignment with the business.
- As a component owner whose component impacts multiple platforms (in a 5-10 member team), work with customers to gather their requirements and deliver the project end to end.
Required Experience, Skills, and Qualifications
- 5+ years of experience as a DevOps Engineer. Experience with Golang is a plus
- At least one End to End CI/CD Implementation experience
- Excellent problem-solving and debugging skills in the DevOps area
- Good understanding of containerization (Docker/Kubernetes)
- Hands-on build/package tool experience
- Experience with AWS services: Glue, Athena, Lambda, EC2, RDS, EKS/ECS, ALB, VPC, SSM, Route 53
- Experience with setting up CI/CD pipeline for Glue jobs, Athena, Lambda functions
- Experience architecting interaction with services and application deployments on AWS
- Experience with Groovy and writing Jenkinsfiles
- Experience with repository management, code scanning/linting, secure scanning tools
- Experience with deployments and application configuration on Kubernetes
- Experience with microservice orchestration tools (e.g. Kubernetes, Openshift, HashiCorp Nomad)
- Experience with time-series and document databases (e.g. Elasticsearch, InfluxDB, Prometheus)
- Experience with message buses (e.g. Apache Kafka, NATS)
- Experience with key-value stores and service discovery mechanisms (e.g., Redis, HashiCorp Consul); see the sketch below
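As a toy illustration of the key-value store experience listed above, the snippet below registers a service endpoint in Redis with a short TTL so that stale entries expire on their own. It assumes the redis-py client and a local Redis instance; the key naming scheme is invented for the example.

```python
# Minimal sketch with redis-py (pip install redis); host/port are assumptions.
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)
r.set("service:billing:endpoint", "10.0.0.12:8080", ex=30)  # TTL acts as a heartbeat
print(r.get("service:billing:endpoint"))
```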

