Only apply on this link - https://loginext.hire.trakstar.com/jobs/fk025uh?source=
LogiNext is looking for a technically savvy and passionate Associate Vice President - Product Engineering - DevOps or Senior Database Administrator to lead the development and operations efforts for the product. You will choose and deploy tools and technologies to build and support a robust infrastructure.
You have hands-on experience in building secure, high-performing and scalable infrastructure. You have experience automating and streamlining development operations and processes. You are a master at troubleshooting and resolving issues in dev, staging and production environments.
Responsibilities:
- Design and implement scalable infrastructure for delivering and running web, mobile and big data applications on cloud
- Scale and optimise a variety of SQL and NoSQL databases (especially MongoDB), web servers, application frameworks, caches, and distributed messaging systems
- Automate the deployment and configuration of the virtualized infrastructure and the entire software stack
- Plan, implement and maintain robust backup and restoration policies ensuring low RTO and RPO
- Support several Linux servers running our SaaS platform stack on AWS, Azure, IBM Cloud, Ali Cloud
- Define and build processes to identify performance bottlenecks and scaling pitfalls
- Manage robust monitoring and alerting infrastructure
- Explore new tools to improve development operations to automate daily tasks
- Ensure High Availability and Auto-failover with minimum or no manual interventions
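The high-availability and auto-failover responsibility above boils down to a health-check loop that promotes a standby after repeated primary failures. A minimal sketch, assuming injectable check and promotion callbacks (these names are illustrative placeholders, not LogiNext's actual stack; a real monitor would also sleep between checks):

```python
def monitor_and_failover(check_primary, promote_standby, max_failures=3):
    """Promote the standby after `max_failures` consecutive failed health checks.

    `check_primary` returns True while the primary is healthy; `promote_standby`
    performs the (hypothetical) failover action. Returns True once failover ran.
    """
    failures = 0
    while failures < max_failures:
        if check_primary():
            failures = 0  # healthy again: reset the consecutive-failure counter
        else:
            failures += 1
    promote_standby()
    return True
```

Counting only consecutive failures avoids failing over on a single transient blip, which is the usual guard against flapping.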
Requirements:
- Bachelor’s degree in Computer Science, Information Technology or a related field
- 11 to 14 years of experience in designing and maintaining high volume and scalable micro-services architecture on cloud infrastructure
- Strong background in Linux/Unix Administration and Python/Shell Scripting
- Extensive experience working with cloud platforms like AWS (EC2, ELB, S3, Auto-scaling, VPC, Lambda), GCP, Azure
- Experience in deployment automation, Continuous Integration and Continuous Deployment (Jenkins, Maven, Puppet, Chef, GitLab) and monitoring tools like Zabbix, CloudWatch, Nagios
- Knowledge of Java Virtual Machines, Apache Tomcat, Nginx, Apache Kafka, Microservices architecture, Caching mechanisms
- Experience in query analysis, performance tuning and database redesign
- Experience in enterprise application development, maintenance and operations
- Knowledge of best practices and IT operations in an always-up, always-available service
- Excellent written and oral communication, judgment and decision-making skills.
- Excellent leadership skills.
About LogiNext
LogiNext is among the fastest-growing tech companies, providing solutions to simplify and automate the ecosphere of logistics and supply chain management. Our aim is to organize the daunting process of logistics and supply chain planning with an array of SaaS products driven by the most robust enterprise solutions globally.
Our clientele is spread across the globe and we empower them to optimize their supply chain operations by unique data capturing, advanced analytics and visualization. From inception, LogiNext has been an industry leader and recipient of awards like NetApp's Innovative Tech Company of the year, Entrepreneur's Logistics Firm of the Year, Aegis's innovation in Big Data, CIO Choice Award for best supply chain logistics cloud solutions, etc.
Backed by influential industry leaders like PayTM and Indian Angel Network, and with partners like IBM, Microsoft, Google, AWS and Samsung, LogiNext has achieved exponential success in a very short span of time and is set to exceed 300% growth by the end of 2016. The true growth hackers who paved the way for this success are the people working exceptionally hard and adding value to our organisation. Our brand ambassadors - that's how we address our people - bring unique values, discipline and problem-solving skills to nurture the innovative and entrepreneurial work culture at LogiNext. Passion, versatility, expertise and a hunger for success is the mantra chanted by every Logi-Nexter!
Similar jobs
- At least 5 years of experience in cloud technologies (AWS and Azure) and development.
- Experience in implementing DevOps practices and DevOps tools in areas like CI/CD using Jenkins, environment automation, release automation, virtualization, infrastructure as code and metrics tracking.
- Hands on experience in DevOps tools configuration in different environments.
- Strong knowledge of working with DevOps design patterns, processes and best practices
- Hands-on experience in setting up build pipelines.
- Prior working experience in system administration or architecture in Windows or Linux.
- Must have experience in Git (Bitbucket, GitHub, GitLab)
- Hands-on experience on Jenkins pipeline scripting.
- Hands-on knowledge in one scripting language (Nant, Perl, Python, Shell or PowerShell)
- Configuration level skills in tools like SonarQube (or similar tools) and Artifactory.
- Expertise in virtual infrastructure (VMware, VirtualBox, QEMU, KVM or Vagrant) and environment automation/provisioning using SaltStack/Ansible/Puppet/Chef
- Deploying, automating, maintaining and managing Azure cloud-based production systems, including capacity monitoring.
- Good to have experience in migrating code repositories from one source control to another.
- Hands-on experience in Docker containers and orchestration-based deployments like Kubernetes, Service Fabric and Docker Swarm.
- Must have good communication and problem-solving skills
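Scripting-language requirements like the one above usually come down to writing small, robust automation helpers. A minimal Python sketch of a retry-with-backoff wrapper for flaky deployment steps (the helper name and defaults are illustrative, not taken from any posting):

```python
import time

def retry(step, attempts=3, delay=0.0):
    """Run `step` up to `attempts` times, sleeping `delay` seconds between tries.

    Re-raises the last exception if every attempt fails, so callers still see
    a real error instead of a silent no-op.
    """
    for attempt in range(attempts):
        try:
            return step()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the failure
            time.sleep(delay)
```

In a real pipeline `step` might wrap an artifact upload or a deployment API call; bounding the attempts keeps the job from hanging forever on a genuinely broken environment.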
You will be responsible for:
- Managing all DevOps and infrastructure for Sizzle
- We have both cloud and on-premise servers
- Work closely with all AI and backend engineers on processing requirements and managing both development and production requirements
- Optimize the pipeline to ensure ultra-fast processing
- Work closely with management team on infrastructure upgrades
You should have the following qualities:
- 3+ years of experience in DevOps, and CI/CD
- Deep experience in: GitLab, GitOps, Ansible, Docker, Grafana, Prometheus
- Strong background in Linux system administration
- Deep expertise with AI/ML pipeline processing, especially with GPU processing. This doesn’t need to include model training, data gathering, etc. We’re looking more for experience on model deployment, and inferencing tasks at scale
- Deep expertise in Python including multiprocessing / multithreaded applications
- Performance profiling including memory, CPU, GPU profiling
- Error handling and building robust scripts that will be expected to run for weeks to months at a time
- Deploying to production servers and monitoring and maintaining the scripts
- DB integration including pymongo and sqlalchemy (we have MongoDB and PostgreSQL databases on our backend)
- Expertise in Docker-based virtualization including - creating & maintaining custom Docker images, deployment of Docker images on cloud and on-premise services, monitoring of production Docker images with robust error handling
- Expertise in AWS infrastructure, networking, availability
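The multiprocessing/multithreading and error-handling requirements above share one core pattern: isolate per-item failures so a script that runs for weeks does not die on one bad input. A minimal sketch using a thread pool (the function names are invented for illustration):

```python
from concurrent.futures import ThreadPoolExecutor

def run_safely(fn, item):
    """Turn a failure on one item into a result record instead of a crash."""
    try:
        return ("ok", item, fn(item))
    except Exception as exc:
        return ("error", item, repr(exc))

def process_all(fn, items, workers=4):
    """Process `items` concurrently; results come back in input order."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda i: run_safely(fn, i), items))
```

For CPU-bound or GPU-bound work you would swap the thread pool for processes or a queue of workers, but the crash-isolation idea is the same: the error records get logged and retried rather than taking down the whole job.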
Optional but beneficial to have:
- Experience with running Nvidia GPU / CUDA-based tasks
- Experience with image processing in Python (e.g. OpenCV, Pillow, etc.)
- Experience with PostgreSQL and MongoDB (or general SQL familiarity)
- Excited about working in a fast-changing startup environment
- Willingness to learn rapidly on the job, try different things, and deliver results
- Bachelors or Masters degree in computer science or related field
- Ideally a gamer or someone interested in watching gaming content online
Skills:
DevOps, Ansible, CI/CD, GitLab, GitOps, Docker, Python, AWS, GCP, Grafana, Prometheus, SQLAlchemy, Linux/Ubuntu system administration
Seniority: We are looking for a mid to senior level engineer
Salary: Will be commensurate with experience.
Who Should Apply:
If you have the right experience, regardless of your seniority, please apply.
Work Experience: 3 years to 6 years
DevOps Engineer
1. Should have at least 5 years of experience
2. Should have working experience in Docker, microservices architecture application deployment, GitHub Container Registry, GitHub Actions, load balancers and the Nginx web server
3. Should have working expertise in CI/CD tools
4. Should have working experience with Bash scripting
5. Good to have knowledge of at least one cloud platform's services
DevOps Engineer (Automation)
ABOUT US
Established in 2009, Ashnik is a leading open-source solutions and consulting company in South East Asia and India, headquartered in Singapore. We enable digital transformation for large enterprises through our design, architecting and solution skills. Over 100 large enterprises in the region have acknowledged our expertise in delivering solutions using key open-source technologies. Our offerings form a critical part of digital transformation, big data platforms, cloud and web acceleration and IT modernization. We represent EDB, Pentaho, Docker, Couchbase, MongoDB, Elastic, NGINX, Sysdig, Redis Labs, Confluent and HashiCorp as their key partners in the region. Our team members bring decades of experience in delivering confidence to enterprises in adopting open-source software and are known for their thought leadership.
LOCATION: Mumbai
THE POSITION
Ashnik is looking for a talented and passionate technical consultant to be part of the training team and work with customers on DevOps solutions. You will be responsible for implementation and consulting work for customers across SEA and India. We are looking for personnel with personal qualities like -
- Passion for working for different customers and different environments.
- Excellent communication and articulation skills
- Aptitude for learning new technology and willingness to understand technologies which he/she is not directly working on.
- Willingness to travel within and outside the country.
- Ability to independently work at the customer site and navigate through different teams.
RESPONSIBILITIES
First 2 months:
- Get an in-depth understanding of Containers, Kubernetes, CI/CD, IaC.
- Get hands-on experience with various technologies of Mirantis Kubernetes Engine, Terraform, Vault, Sysdig
After 2 months: The ideal candidate will ensure the following outcomes from every client deployment:
- Utilize various open source technologies and tools to orchestrate solutions.
- Write scripts and automation using Perl/Python/Groovy/Java/Bash
- Build independent tools and solutions to effectively scale the infrastructure.
- Will be involved in automation using CI/CD/DevOps concepts.
- Be able to document procedures for building and deploying.
- Work on a cloud-based infrastructure spanning Amazon Web Services and Microsoft Azure
- Work with pre-sales and sales team to help customers during their evaluation of Terraform, Vault and other open source technologies.
- Conduct workshops for customers as needed for technical hand-holding and technical handover.
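The scripting responsibilities above often amount to orchestrating tools like Terraform from a thin wrapper. A hedged Python sketch of the standard init/plan/apply sequence (the wrapper function is hypothetical; the runner is injectable so the flow can be exercised without Terraform installed):

```python
import subprocess

def terraform_deploy(workdir, runner=subprocess.run):
    """Run the standard Terraform init -> plan -> apply sequence in `workdir`.

    `runner` defaults to subprocess.run but can be swapped for a stub in tests.
    Returns the list of commands executed.
    """
    commands = [
        ["terraform", "init"],
        ["terraform", "plan", "-out=tfplan"],  # save the plan to a file
        ["terraform", "apply", "tfplan"],      # apply exactly that saved plan
    ]
    for cmd in commands:
        runner(cmd, cwd=workdir, check=True)  # check=True raises on failure
    return commands
```

Applying a saved plan file rather than re-planning at apply time is the usual way to guarantee that what was reviewed is what gets applied.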
SKILLS AND EXPERIENCE
- Graduate/Post Graduate in any technology.
- Hands on experience in Terraform, AWS Cloud Formation, Ansible, Jenkins, Docker, Git, Jira etc
- Hands on experience at least in one scripting language like Perl/Python/Groovy/Bash
- Knowledge of Java/JVM based languages.
- Experience in Jenkins maintenance and scalability, designing and implementing advanced automation pipelines with Jenkins.
- Experience with a repository manager like JFrog Artifactory
- Strong background in git, Github/Bitbucket, and code branching/merging strategies
- Ability to understand and make trade-offs among different DevOps tools.
ADDITIONAL SKILLS
- Experience with Kubernetes, AWS, Google Cloud and/or Azure is a strong plus
- Some experience with secrets/key management preferably with HashiCorp Vault
- Experience using monitoring solutions, i.e. Datadog, Prometheus, ELK Stack, NewRelic, Nagios etc
Package: 25-30 lakhs
Company Name: Petpooja!
Location: Ahmedabad
Designation: DevOps Engineer
Experience: Between 2 to 7 Years
Candidates from Ahmedabad will be preferred
Job Location: Ahmedabad
Job Responsibilities:
- Plan, implement, and maintain the software development infrastructure.
- Introduce and oversee software development automation across cloud providers like AWS and Azure
- Help develop, manage, and monitor continuous integration and delivery systems
- Collaborate with software developers, QA specialists, and other team members to ensure the timely and successful delivery of new software releases
- Contribute to software design and development, including code review and feedback
- Assist with troubleshooting and problem-solving when issues arise
- Keep up with the latest industry trends and best practices while ensuring the company meets configuration requirements
- Participate in team improvement initiatives
- Help create and maintain internal documentation using Git or other similar applications
- Provide on-call support as needed
Qualification Required:
1. You should have experience handling various services on the AWS cloud.
2. Previous experience as a Site Reliability Engineer would be an advantage.
3. You should be well versed with various commands and hands-on with Linux/Ubuntu administration and other aspects of the software development team's requirements.
4. At least 2 to 7 years of experience managing AWS services such as Auto Scaling, Route 53, and various other internal networks.
5. An AWS certification is recommended.
- 4+ years of experience in IT and infrastructure
- 2+ years of experience in Azure DevOps
- Experience using Azure DevOps both as a CI/CD tool and as an Agile framework
- Practical experience building and maintaining automated operational infrastructure
- Experience in building React or Angular applications; .NET is a must.
- Practical experience using version control systems with Azure Repos
- Developed and maintained scripts using PowerShell and ARM templates/Terraform for Infrastructure as Code.
- Experience in Linux shell scripting (Ubuntu) is a must
- Hands-on experience with release automation, configuration and debugging.
- Should have good knowledge of branching and merging
- Integration of static code analysis tools like SonarQube and Snyk is a must.
We are looking for a Senior Platform Engineer responsible for handling our GCP/AWS clouds. The candidate will be responsible for automating the deployment of cloud infrastructure and services to support application development and hosting (architecting, engineering, deploying, and operationally managing the underlying logical and physical cloud computing infrastructure).
Location: Bangalore
Reporting Manager: VP, Engineering
Job Description:
● Collaborate with teams to build and deliver solutions implementing serverless,
microservice-based, IaaS, PaaS, and containerized architectures in GCP/AWS environments.
● Responsible for deploying highly complex, distributed transaction processing systems.
● Work on continuous improvement of the products through innovation and learning. Someone with
a knack for benchmarking and optimization
● Hiring, developing, and cultivating a high-performing and reliable cloud support team
● Building and operating complex CI/CD pipelines at scale
● Work with GCP Services, Private Service Connect, Cloud Run, Cloud Functions, Pub/Sub, Cloud
Storage, Networking in general
● Collaborate with Product Management and Product Engineering teams to drive excellence in
Google Cloud products and features.
● Ensures efficient data storage and processing functions in accordance with company security
policies and best practices in cloud security.
● Ensuring scaled database setup/monitoring with near-zero downtime
Key Skills:
● Hands-on software development experience in Python, NodeJS, or Java
● 5+ years of Linux/Unix Administration monitoring, reliability, and security of Linux-based, online,
high-traffic services and Web/eCommerce properties
● 5+ years of production experience in large-scale cloud-based Infrastructure (GCP preferred)
● Strong experience with log analysis and monitoring tools such as CloudWatch, Splunk, Dynatrace, Nagios, etc.
● Hands-on experience with AWS Cloud – EC2, S3 buckets, RDS
● Hands-on experience with Infrastructure as Code (e.g., CloudFormation, ARM, Terraform, Ansible, Chef, Puppet) and version control tools
● Hands-on experience with configuration management (Chef/Ansible)
● Experience in designing High Availability infrastructure and planning for Disaster Recovery solutions
Regards
Team Merito
Implementing various development, testing, automation tools, and IT infrastructure
Planning the team structure, activities, and involvement in project management activities.
Managing stakeholders and external interfaces
Setting up tools and required infrastructure
Defining and setting development, test, release, update, and support processes for DevOps operation
Have the technical skill to review, verify, and validate the software code developed in the project.
Applying troubleshooting techniques and fixing code bugs
Monitoring the processes during the entire lifecycle for adherence, and updating or creating new processes for improvement and waste minimization
Encouraging and building automated processes wherever possible
Identifying and deploying cybersecurity measures by continuously performing vulnerability assessments and risk management
Incident management and root cause analysis
Coordination and communication within the team and with customers
Selecting and deploying appropriate CI/CD tools
Strive for continuous improvement and build continuous integration, continuous development, and constant deployment pipeline (CI/CD Pipeline)
Mentoring and guiding the team members
Monitoring and measuring customer experience and KPIs
Managing periodic reporting on the progress to the management and the customer
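A KPI like service availability, from the monitoring and reporting duties above, reduces to simple arithmetic over the reporting period. A minimal sketch (the period length and incident durations are invented for illustration):

```python
def availability_pct(period_minutes, downtime_minutes):
    """Availability = (period - total downtime) / period, as a percentage."""
    if period_minutes <= 0:
        raise ValueError("period must be positive")
    return 100.0 * (period_minutes - sum(downtime_minutes)) / period_minutes

# e.g. a 30-day month is 43200 minutes; two incidents of 10 and 12 minutes
# leave 43178 minutes of uptime, i.e. roughly 99.95% availability
```

The same shape works for error-rate or SLA-breach KPIs: sum the bad minutes (or requests), divide by the total, and report per period.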