Position: Cloud and Infrastructure Automation Consultant
Location: India (Pan India), Work from Home
The position:
This exciting role in Ashnik’s consulting team brings a great opportunity to design and deploy automation solutions for Ashnik’s enterprise customers spread across Southeast Asia and India. The role takes the lead in consulting customers on automating cloud and datacentre resources. You will work hands-on with your team on infrastructure solutions, automating infrastructure deployments so that they are secure and compliant, and you will provide implementation oversight to help customers overcome technology and business challenges.
Responsibilities:
· Lead consultative discussions to identify customer challenges and suggest right-fit open-source tools
· Independently determine customer needs and create solution frameworks
· Design and develop moderately complex software solutions to meet those needs
· Use a process-driven approach in designing and developing solutions
· Create consulting work packages and detailed SOWs, and assist the sales team in positioning them to enterprise customers
· Implement automation recipes (Ansible/Chef) and scripts (Ruby, PowerShell, Python) as part of an automated installation/deployment process
Experience and skills required:
· 8 to 10 years of experience in IT infrastructure
· Proven technical skill in designing and delivering enterprise-level solutions involving integration of complex technologies
· 6+ years of experience with RHEL/Windows system automation
· 4+ years of experience using Python and/or Bash scripting to solve and automate common system tasks
· Strong understanding and knowledge of networking architecture
· Experience with Sentinel policy as code
· Strong understanding of AWS and Azure infrastructure
· Experience deploying and using automation tools such as Terraform, CloudFormation, CI/CD pipelines, Jenkins, and GitHub Actions
· Experience with HashiCorp Configuration Language (HCL) for module and policy development
· Knowledge of cloud tools including CloudFormation, CloudWatch, Control Tower, CloudTrail, and IAM is desirable
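As one illustration of the "Python and/or Bash scripting to solve and automate common system tasks" requirement, here is a minimal, hedged sketch of such a task: flagging filesystems whose usage crosses a threshold. The paths and threshold are made-up example values, not details from the posting.

```python
import shutil

def check_disk_usage(paths, threshold_pct=80.0):
    """Return (path, used_pct) tuples for filesystems above threshold_pct.

    Illustrative sketch only: paths and the 80% default are assumptions.
    """
    alerts = []
    for path in paths:
        usage = shutil.disk_usage(path)           # total/used/free in bytes
        used_pct = usage.used / usage.total * 100
        if used_pct > threshold_pct:
            alerts.append((path, round(used_pct, 1)))
    return alerts
```

A cron job could run a script like this hourly and feed the result to a notification channel.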
This role requires a high degree of self-initiative, working with diverse teams, and working with customers spread across the Southeast Asia and India region. It requires you to be proactive in communicating with customers and internal teams about industry trends and technology developments, and in creating thought leadership.
About Us
Ashnik is a leading enterprise open-source solutions company in Southeast Asia and India, enabling organizations to adopt open source for their digital transformation goals. Founded in 2009, it offers a full-fledged Open-Source Marketplace, Solutions, and Services – Consulting, Managed, Technical, Training. Over 200 leading enterprises so far have leveraged Ashnik’s offerings in the space of Database platforms, DevOps & Microservices, Kubernetes, Cloud, and Analytics.
As a team culture, Ashnik is a family for its team members. Each member brings a different perspective, new ideas, and a diverse background. Yet together we all strive for one goal: to deliver the best solutions to our customers using open-source software. We passionately believe in the power of collaboration, and through an open platform of idea exchange we create a vibrant environment for growth and excellence.
Package: up to INR 20 lakh
Experience: 8 years
Key responsibilities include, but are not limited to:
Help identify and drive speed, performance, scalability, and reliability optimizations based on experience and learnings from production incidents.
Work in an agile DevSecOps environment on creating, maintaining, monitoring, and automating the overall solution deployment.
Understand and explain the effect of product architecture decisions on systems.
Identify issues and/or opportunities for improvements that are common across multiple services/teams.
This role will require weekend deployments.
Skills and Qualifications:
1. 3+ years of experience in an end-to-end DevOps development process, with a heavy focus on service monitoring and site reliability engineering work.
2. Advanced knowledge of programming/scripting languages (Bash, Perl, Python, Node.js).
3. Experience in Agile/Scrum enterprise-scale software development, including working with Git, JIRA, Confluence, etc.
4. Advanced experience with core microservice technology (RESTful development).
5. Working knowledge of advanced AI/ML tools is a plus.
6. Working knowledge of one or more of the cloud services: Amazon AWS, Microsoft Azure.
7. Bachelor’s or Master’s degree in Computer Science, or equivalent related field experience.
Key Behaviours / Attitudes:
Professional curiosity and a desire to develop a deep understanding of services and technologies.
Experience building and running systems to drive high availability, performance, and operational improvements.
Excellent written and oral communication skills: the ability to ask pertinent questions and to assess, aggregate, and report the responses.
Ability to quickly grasp and analyze complex and rapidly changing systems.
Soft skills
1. Self-motivated and self-managing.
2. Excellent communication / follow-up / time management skills.
3. Ability to fulfill role/duties independently within defined policies and procedures.
4. Ability to balance multiple tasks and priorities while maintaining a high level of customer satisfaction.
5. Be able to work in an interrupt-driven environment.
Work with Dori AI’s world-class technology to develop, implement, and support Dori’s global infrastructure.
As a member of the IT organization, assist with the analysis of existing complex programs and formulate logic for new complex internal systems. Prepare flowcharts, perform coding, and test/debug programs. Develop conversion and system implementation plans. Recommend changes to development, maintenance, and system standards.
A leading contributor both individually and as a team member, providing direction and mentoring to others. Work is non-routine and very complex, involving the application of advanced technical/business skills in a specialized area. BS degree or equivalent experience in programming on enterprise or departmental servers or systems.
Who We Are
Grip is building a new category of investment options for the new age of Indian investors. Millennials don’t communicate, shop, pay, entertain, or work like the previous generation, so why should they invest the same way?
Started in June 2020, Grip has seen 20% month-on-month growth to become one of India’s fastest-growing destinations for alternative investments. Today, Grip offers multiple investment options providing 8-16% annual returns and 1-60 month tenures as the country’s only multi-asset (lease financing, inventory financing, corporate bonds, start-up equity, commercial real estate) investment platform. With a minimum investment size of INR 20,000, Grip is democratizing investment options that were previously only available to the largest funds and family offices.
Finance and technology (FinTech) is what we do, but people are at the core of our mission. From client-facing roles to technology, and everywhere in between, you’ll work alongside a diverse team who loves to solve problems, think creatively, and fly the plane as we continue to build it.
In the News
- Money Control
- Hindu Business
What We Can Offer You
● Young and fast-growing company with a healthy work-life balance
● Great culture based on the following core values
○ Courage
○ Transparency
○ Ownership
○ Customer Obsession
○ Celebration
● Lean structure and no micromanaging: you get to own your work
● The company has just turned two, so you get a seat on a rocket ship that’s just taking off!
● High focus on Learning & Development and monetary support for relevant upskilling
● Competitive compensation along with equity ownership for wealth creation
What You’ll Do
● Design cloud infrastructure that is secure, scalable, and highly available on AWS
● Work collaboratively with software engineering to define infrastructure and deployment requirements
● Provision, configure, and maintain AWS cloud infrastructure defined as code
● Ensure configuration and compliance with configuration management tools
● Administer and troubleshoot Linux-based systems
● Troubleshoot problems across a wide array of services and functional areas
● Build and maintain operational tools for deployment, monitoring, and analysis of AWS infrastructure and systems
● Perform infrastructure cost analysis and optimization
Your Superpowers
● At least 3-5 years of experience building and maintaining AWS infrastructure (VPC, EC2, Security Groups, IAM, ECS, CodeDeploy, CloudFront, S3, RDS, Elastic Beanstalk)
● Strong understanding of how to secure AWS environments and meet compliance requirements
● Expertise using Chef/CloudFormation/Ansible for configuration management
● Hands-on experience deploying and managing infrastructure with Terraform
● Solid foundation in networking and Linux administration
● Experience with Docker, GitHub, Kubernetes, ELK, and deploying applications on AWS
● Ability to learn/use a wide variety of open-source technologies and tools
● Strong bias for action and ownership
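As one hedged illustration of what "securing AWS environments and meeting compliance requirements" can mean in practice, the sketch below lints security-group rules for SSH open to the world. The rules are modeled as plain dicts rather than fetched live via boto3, so the data shape is an assumption made for the example.

```python
def find_open_ssh(security_groups):
    """Return IDs of groups that allow SSH (port 22) from 0.0.0.0/0.

    Illustrative sketch: `security_groups` is a list of dicts shaped like
    {"id": ..., "rules": [{"from_port": ..., "to_port": ..., "cidrs": [...]}]},
    an assumed structure rather than the actual AWS API response.
    """
    offenders = []
    for sg in security_groups:
        for rule in sg.get("rules", []):
            open_to_world = "0.0.0.0/0" in rule.get("cidrs", [])
            covers_ssh = rule.get("from_port", 0) <= 22 <= rule.get("to_port", 0)
            if open_to_world and covers_ssh:
                offenders.append(sg["id"])
                break  # one bad rule is enough to flag the group
    return offenders
```

A check like this would typically run in CI or on a schedule, feeding flagged group IDs into a ticketing or alerting workflow.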
We are looking for an excellent, experienced DevOps professional. Be a part of a vibrant, rapidly growing tech enterprise with a great working environment. As a DevOps Engineer, you will be responsible for managing and building upon the infrastructure that supports our data intelligence platform. You’ll also be involved in building tools and establishing processes that empower developers to deploy and release their code seamlessly.
Responsibilities
The ideal DevOps Engineer possesses a solid understanding of system internals and distributed systems.
Understanding of accessibility and security compliance (depending on the specific project)
User authentication and authorization between multiple systems, servers, and environments
Integration of multiple data sources and databases into one system
Understanding of the fundamental design principles behind a scalable application
Configuration management tools (Ansible/Chef/Puppet) and cloud service providers (AWS/DigitalOcean); experience with the Docker + Kubernetes ecosystem is a plus
Able to make key decisions for our infrastructure, networking, and security
Writing and adapting shell scripts during migrations and for database connections
Monitoring production server health across parameters such as CPU load, physical memory, and swap memory, and setting up monitoring tools (e.g., Nagios) to watch them
Creating alerts and configuring monitoring of specified metrics to manage cloud infrastructure efficiently
Setting up and managing VPCs and subnets, connecting different zones, and blocking suspicious IPs/subnets via ACLs
Creating and managing AMIs, snapshots, and volumes, and upgrading/downgrading AWS resources (CPU, memory, EBS)
The candidate will be responsible for managing microservices at scale and maintaining the compute and storage infrastructure for various product teams.
Strong knowledge of configuration management tools such as Ansible, Chef, and Puppet
Extensive work with change-tracking tools like JIRA, log analysis, and maintaining documentation of production server error log reports
Experienced in troubleshooting, backup, and recovery
Excellent knowledge of cloud service providers such as AWS and DigitalOcean
Good knowledge of the Docker and Kubernetes ecosystem
Proficient understanding of code-versioning tools such as Git
Must have experience working in an automated environment
Good knowledge of Amazon Web Services architecture, including Amazon EC2, Amazon S3 (Amazon Glacier), Amazon VPC, and Amazon CloudWatch
Scheduling jobs using crontab and creating swap memory
Proficient knowledge of access management (IAM)
Must have expertise in Maven, Jenkins, Chef, SVN, GitHub, Tomcat, Linux, etc.
The candidate should have good knowledge of GCP.
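The monitoring duties above (watching CPU load, physical memory, and swap, and raising alerts) can be sketched as a simple threshold check. The metric names and limits below are illustrative assumptions; a real setup would feed values from an agent or a tool like Nagios rather than a hand-built dict.

```python
# Assumed example limits, not values from the posting.
DEFAULT_LIMITS = {"cpu_load": 4.0, "mem_used_pct": 90.0, "swap_used_pct": 50.0}

def health_alerts(metrics, limits=DEFAULT_LIMITS):
    """Return a human-readable alert string for every metric over its limit.

    `metrics` maps metric name -> current value; unknown metrics are ignored.
    """
    return [
        f"{name}={value} exceeds limit {limits[name]}"
        for name, value in metrics.items()
        if name in limits and value > limits[name]
    ]
```

In practice a monitoring tool would evaluate rules like these continuously and route the resulting alerts to on-call engineers.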
Educational Qualifications
B.Tech IT / M.Tech / MBA IT / BCA / MCA, or any degree in a relevant field
Experience: 2-6 years
- Extensive experience in designing and supporting Azure managed services operations.
- Maintaining Azure Active Directory and Azure AD authentication.
- Azure update management: handling updates/patching.
- Good understanding of Azure services (Azure App Service, Azure SQL, Azure Storage Account, etc.).
- Understanding of load balancers, DNS, virtual networks, NSGs, and firewalls in a cloud environment.
- Writing ARM templates and setting up automation for resource provisioning.
- Knowledge of Azure Automation and Desired State Configuration.
- Good understanding of high availability and autoscaling.
- Azure Backup and ASR (Azure Site Recovery).
- Azure monitoring and configuration monitoring (performance metrics, OMS).
- Cloud migration experience (on-premises to cloud).
- PowerShell scripting for automating custom tasks.
- Strong experience in configuring, maintaining, and troubleshooting Microsoft-based production systems.
Certification:
Azure Administrator (AZ-103) & Azure Architect (AZ-300 & AZ-301)
- Demonstrated experience with AWS
- Knowledge of servers, networks, storage, client-server systems, and firewalls
- Strong expertise in Windows and/or Linux operating systems, including system architecture and design, as well as experience supporting and troubleshooting stability and performance issues
- Thorough understanding of and experience with virtualization technologies (e.g., VMWare/Hyper-V)
- Knowledge of core network services such as DHCP, DNS, IP routing, VLANs, layer 2/3 routing, and load balancing is required
- Experience reading, writing, or modifying PowerShell scripts, Bash scripts, and Python code; experience using Git
- Working know-how of software-defined lifecycles, product packaging, and deployments
- PostgreSQL or Oracle database administration (backup, restore, tuning, monitoring, management)
- At least two from the AWS Associate certifications: Solutions Architect, DevOps, or SysOps
- At least one from the AWS Professional certifications: Solutions Architect or DevOps
- AWS: S3, Redshift, DynamoDB, EC2, VPC, Lambda, CloudWatch, etc.
- Big data: Databricks, Cloudera, Glue, and Athena
- DevOps: Jenkins, Bitbucket
- Automation: Terraform, CloudFormation, Python, and shell scripting, including experience automating AWS infrastructure with Terraform
- Experience in database technologies is a plus.
- Knowledge in all aspects of DevOps (source control, continuous integration, deployments, etc.)
- Proficiency in security implementation best practices on IAM policies, KMS encryption, Secrets Management, Network Security Groups etc.
- Experience working in the SCRUM Environment
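As a hedged sketch of the IAM least-privilege practices listed above, the following lints an IAM policy document for Allow statements that use a wildcard action or resource. The policy JSON used in testing is invented purely for illustration; only exact `"*"` wildcards are checked, not partial patterns like `s3:*`.

```python
import json

def overly_broad_statements(policy_json):
    """Return indices of Allow statements whose Action or Resource is "*".

    Illustrative only: checks the standard IAM policy document shape
    (Statement list with Effect/Action/Resource), flagging exact "*" values.
    """
    policy = json.loads(policy_json)
    flagged = []
    for i, stmt in enumerate(policy.get("Statement", [])):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        resources = stmt.get("Resource", [])
        # IAM allows a single string or a list; normalize to lists.
        if isinstance(actions, str):
            actions = [actions]
        if isinstance(resources, str):
            resources = [resources]
        if "*" in actions or "*" in resources:
            flagged.append(i)
    return flagged
```

A linter like this could run as a CI gate before policies are applied with Terraform or CloudFormation.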
Araali Networks is seeking a highly driven DevOps engineer to help streamline the release and deployment process, while assuring high quality.
Responsibilities:
Use best practices to streamline the release process
Use IaC methods to manage cloud infrastructure
Use test automation for release qualification
Skills and Qualifications:
Hands-on experience in managing release pipelines
Good understanding of public cloud infrastructure
Working knowledge of Python test frameworks like PyUnit, and of automation tools like Selenium, Cucumber, etc.
Strong analytical and debugging skills
Bachelor’s degree in Computer Science
About Araali:
Araali Networks is a SaaS-based cybersecurity startup that has raised a total of $10M from well-known investors like A Capital, Firebolt Ventures, and SV Angels, and through a strategic investment by a publicly traded security company.
The company is disrupting the cloud firewall market by auto-creating nano-perimeters around every cloud app. The Araali solution enables developers to focus on features and improves the security posture through simplification and automation.
The security controls are embedded at the time of DevOps. The precision of Araali controls also helps with security operations where alerts are precise and intelligently routed to the right app team, making it actionable in real-time.
Cloud native technologies - Kubernetes (EKS, GKE, AKS), AWS ECS, Helm, CircleCI, Harness, serverless platforms (AWS Fargate, etc.)
Infrastructure as Code tools - Terraform, CloudFormation, Ansible
Scripting - Python, Bash
Desired Skills & Experience:
Projects/internships with coding experience in any of JavaScript, Python, Golang, Java, etc.
Hands-on scripting and software development fluency in any programming language (Python, Go, Node, Ruby).
Basic understanding of Computer Science fundamentals - Networking, Web Architecture etc.
Infrastructure automation experience with knowledge of at least a few of these tools: Chef, Puppet, Ansible, CloudFormation, Terraform, Packer, Jenkins etc.
Bonus points if you have contributed to open source projects, participated in competitive coding platforms like Hackerearth, CodeForces, SPOJ etc.
You’re willing to learn various new technologies and concepts. The “cloud-native” field of software is evolving fast and you’ll need to quickly learn new technologies as required.
Communication: You like discussing a plan upfront, welcome collaboration, and are an excellent verbal and written communicator.
B.E/B.Tech/M.Tech or equivalent experience.
What is the role?
As a DevOps Engineer, you are responsible for setting up and maintaining Git repositories and DevOps tools such as Jenkins, UCD, Docker, Kubernetes, JFrog Artifactory, cloud monitoring tools, and cloud security.
Key Responsibilities
- Set up, configure, and maintain Git repos, Jenkins, UCD, etc. for multi-cloud hosting environments.
- Architect and maintain the server infrastructure in AWS. Build highly resilient infrastructure following industry best practices.
- Work on Docker images and maintain Kubernetes clusters.
- Develop and maintain automation scripts using Ansible or other available tools.
- Maintain and monitor cloud Kubernetes clusters and patch them when necessary.
- Work on cloud security tools to keep applications secure.
- Participate in the software development lifecycle, specifically infrastructure design, execution, and the debugging required to achieve successful implementation of integrated solutions within the portfolio.
What are we looking for?
- Minimum 4-6 years of experience in the IT industry.
- Expertise in implementing and managing DevOps CI/CD pipelines.
- Experience with DevOps automation tools, and well versed in DevOps frameworks and Agile.
- Working knowledge of scripting using shell and Python, plus Terraform and Ansible, Puppet, or Chef.
- Experience with, and a good understanding of, any cloud such as AWS, Azure, or Google Cloud.
- Knowledge of Docker and Kubernetes is required.
- Proficient troubleshooting skills, with proven ability to resolve complex technical issues.
- Experience working with ticketing tools.
- Knowledge of middleware technologies or databases is desirable.
- Experience and familiarity with the Jira tool is a plus.
What can you look for?
A wholesome opportunity in a fast-paced environment that will enable you to juggle multiple concepts while maintaining quality, interact, share your ideas, and learn plenty while at work. Work with a team of highly talented young professionals and enjoy the benefits of being at Xoxoday.
We are
A fast-growing SaaS commerce company based in Bangalore with offices in Delhi, Mumbai, SF, Dubai, Singapore, and Dublin. We have three products in our portfolio: Plum, Empuls, and Compass. Xoxoday works with over 1000 global clients. We help our clients engage and motivate their employees, sales teams, channel partners, or consumers for better business results.
Way forward
We look forward to connecting with you. Since you may take time to review this opportunity, we will wait around 3-5 days before screening the collected applications and lining up job discussions with the hiring manager. We assure you that we will attempt to maintain a reasonable time window for successfully closing this requirement, and candidates will be kept informed and updated on their application status.
Goodera is looking for an experienced and motivated DevOps professional to be an integral part of its core infrastructure team. As a DevOps Engineer, you must be able to troubleshoot production issues; design, implement, and deploy monitoring tools; collaborate with team members to improve existing engineering tools and develop new ones; optimize the company's computing architecture; and design and conduct security, performance, and availability tests.
Responsibilities:
This is a highly accountable role and the candidate must meet the following professional expectations:
• Owning and improving the scalability and reliability of our products.
• Working directly with product engineering and infrastructure teams.
• Designing and developing various monitoring system tools.
• Accountable for developing deployment strategies and build configuration management.
• Deploying and updating system and application software.
• Ensure regular, effective communication with team members and cross-functional resources.
• Maintaining a positive and supportive work culture.
• First point of contact for handling customer (may be internal stakeholders) issues, providing guidance and recommendations to increase efficiency and reduce customer incidents.
• Develop tooling and processes to drive and improve customer experience, create playbooks.
• Eliminate manual tasks via configuration management.
• Intelligently migrate services from one AWS region to another.
• Create, implement, and maintain security policies to ensure ISO/GDPR/SOC/PCI compliance.
• Verify that infrastructure automation meets compliance goals and is current with the disaster recovery plan.
• Evangelize configuration management and automation to other product developers.
• Stay updated on upcoming technologies to maintain state-of-the-art infrastructure.
Required Candidate profile :
• 3+ years of proven experience working in a DevOps environment.
• 3+ years of proven experience working in AWS Cloud environments.
• Solid understanding of networking and security best practices.
• Experience with infrastructure-as-code frameworks such as Ansible, Terraform, Chef, Puppet, CFEngine, etc.
• Experience in scripting or programming languages (Bash, Python, PHP, Node.js, Perl, etc.)
• Experience designing and building web application environments on AWS, including services such as ECS, ECR, Fargate, Lambda, SNS/SQS, CloudFront, CodeBuild, CodePipeline, CloudWatch, WAF, Active Directory, Kubernetes (EKS), EC2, S3, ELB, RDS, Redshift, etc.
• Hands on Experience in Docker is a big plus.
• Experience working in an Agile, fast paced, DevOps environment.
• Strong Knowledge in DB such as MongoDB / MySQL / DynamoDB / Redis / Cassandra.
• Experience with open-source tools such as HAProxy, Apache, Nginx, and Nagios.
• Fluency with version control systems, with a preference for Git.
• Strong experience with Linux-based infrastructures and Linux administration.
• Experience installing and configuring application servers such as WebLogic, JBoss, and Tomcat.
• Hands-on with logging, monitoring, and alerting tools like ELK, Grafana, Metabase, Monit, Zabbix, etc.
• A team player capable of high performance and flexibility in a dynamic working environment, with the ability to lead and to train others on technical and procedural topics.
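The logging, monitoring, and alerting tools listed above (ELK, Nagios, Zabbix, etc.) ultimately drive threshold-based alerts over log streams. Below is a minimal sketch of such a check; the log format and the error threshold are assumptions made for the example, not details from any of these stacks.

```python
def error_alert(log_lines, max_errors=3):
    """Return True when the count of ERROR-level lines exceeds max_errors.

    Illustrative only: assumes log lines embed a space-delimited " ERROR "
    level token, which is a made-up format for this sketch.
    """
    errors = sum(1 for line in log_lines if " ERROR " in line)
    return errors > max_errors
```

A real deployment would express this as an alerting rule in the monitoring tool itself rather than a standalone script, but the underlying logic is the same.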