11+ ArangoDB Jobs in India
Main tasks
- Supervision of the CI/CD process for automated builds and deployments of web services, web applications, and desktop tools in cloud and container environments
- Responsibility for the operations side of a DevOps organization, especially for development with container technology and orchestration, e.g. Kubernetes
- Installation, operation, and monitoring of web applications in cloud data centers, both for development and testing and for operating our own production cloud
- Implementation of installations of the solution, especially in container contexts
- Introduction, maintenance, and improvement of installation solutions for development in desktop and server environments as well as in the cloud and with on-premise Kubernetes
- Maintenance of the system installation documentation and delivery of training
- Execution of internal software tests and support of the involved teams and stakeholders
- Hands-on experience with Azure DevOps
Qualification profile
- Bachelor’s or master’s degree in communications engineering, electrical engineering, physics or comparable qualification
- Experience in software
- Installation and administration of Linux and Windows systems including network and firewalling aspects
- Experience with build and deployment automation using tools like Jenkins, Gradle, Argo, ArangoDB or similar, as well as system scripting (Bash, PowerShell, etc.)
- Interest in operation and monitoring of applications in virtualized and containerized environments in cloud and on-premise
- Server environments, especially application, web, and database servers
- Knowledge of VMware/K3D/Rancher is an advantage
- Good spoken and written knowledge of English
DevOps & Automation:
- Experience with CI/CD tools like Azure DevOps (YAML pipelines), Git, and GitHub. Capable of automating build, test, and deployment processes to streamline application delivery.
- Hands-on experience with Infrastructure as Code (IaC) tools such as Bicep (preferred), Terraform, Ansible, and ARM Templates.
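The declarative model shared by Bicep, Terraform, and ARM Templates can be sketched in a few lines: describe the desired state, diff it against the actual state, and apply only the difference, so a second apply is a no-op. The sketch below is illustrative Python with invented resource names, not any real provider's API:

```python
# Minimal sketch of the declarative, idempotent "apply" model behind IaC
# tools like Terraform or Bicep. All resource names here are illustrative.

def plan(desired: dict, actual: dict) -> dict:
    """Diff desired state against actual state, as `terraform plan` does."""
    create = {k: v for k, v in desired.items() if k not in actual}
    update = {k: v for k, v in desired.items()
              if k in actual and actual[k] != v}
    delete = {k: actual[k] for k in actual if k not in desired}
    return {"create": create, "update": update, "delete": delete}

def apply(desired: dict, actual: dict) -> dict:
    """Apply the plan; running it twice yields no further changes."""
    changes = plan(desired, actual)
    new_state = dict(actual)
    new_state.update(changes["create"])
    new_state.update(changes["update"])
    for k in changes["delete"]:
        del new_state[k]
    return new_state

desired = {"vm-web": {"size": "B2s"}, "vnet-main": {"cidr": "10.0.0.0/16"}}
actual = {"vm-web": {"size": "B1s"}}
state = apply(desired, actual)
# A second apply is a no-op: plan(desired, state) has no pending changes.
```

The same idempotency is what lets real IaC pipelines run `apply` on every deployment without fear of duplicating resources.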
Cloud Services & Architecture:
- Experience in Azure Cloud services, including Web Apps, AKS, Application Gateway, APIM, and Logic Apps.
- Good understanding of cloud design patterns, security best practices, and cost optimization strategies.
Scripting & Automation:
- Experience in developing and maintaining automation scripts using PowerShell to manage, monitor, and support applications.
- Familiar with Azure CLI, REST APIs, and automating workflows using Azure DevOps Pipelines.
Data Integration & ADF:
- Working knowledge or basic hands-on experience with Azure Data Factory (ADF), focusing on developing and managing data pipelines and workflows.
- Knowledge of data integration practices, including ETL/ELT processes and data transformations.
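As a rough illustration of the ETL pattern an ADF pipeline orchestrates, here is a minimal extract-transform-load step in plain Python; the field names and sample data are invented for the example:

```python
# Hypothetical minimal ETL step of the kind an ADF pipeline orchestrates:
# extract rows from a CSV source, transform (cast types + derive a field),
# and load them into a target store (here, a plain list).

import csv
import io

RAW = """order_id,amount,currency
1001,250.00,INR
1002,99.50,INR
"""

def extract(text: str) -> list:
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows: list) -> list:
    # Cast amounts to float and tag each row with a derived field.
    return [{"order_id": int(r["order_id"]),
             "amount": float(r["amount"]),
             "currency": r["currency"],
             "high_value": float(r["amount"]) > 100} for r in rows]

def load(rows: list, target: list) -> None:
    target.extend(rows)

warehouse = []
load(transform(extract(RAW)), warehouse)
```

In ADF the same three phases map to source datasets, mapping data flows (or copy activities with transformations), and sink datasets.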
Application Management & Monitoring:
- Ability to provide comprehensive support for both new and legacy applications.
- Proficient in managing and monitoring application performance using tools like Azure Monitor, Log Analytics, and Application Insights.
- Understanding of application security principles and best practices.
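The kind of alert rule these monitoring tools evaluate can be illustrated with a simple rolling error-rate check; the threshold and record shape below are assumptions for the sketch, not Azure Monitor's actual API:

```python
# Illustrative alert rule of the kind Azure Monitor / Application Insights
# evaluates: fire when the error rate over a window crosses a threshold.

def error_rate(requests: list) -> float:
    """Fraction of requests in the window with a 5xx status."""
    if not requests:
        return 0.0
    errors = sum(1 for r in requests if r["status"] >= 500)
    return errors / len(requests)

def should_alert(requests: list, threshold: float = 0.05) -> bool:
    # threshold is an assumed default; real alert rules make this configurable
    return error_rate(requests) > threshold

window = [{"status": 200}] * 18 + [{"status": 500}] * 2  # 10% errors
```

A production rule would pull the window from Log Analytics with a KQL query rather than an in-memory list, but the evaluate-threshold-fire shape is the same.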
Database Skills:
- Basic experience with SQL and Azure SQL, including database backups, restores, and application data management.
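As a miniature stand-in for that backup/restore workflow (Azure SQL itself uses az cli exports and BACPAC files, which are not shown here), the stdlib sqlite3 backup API demonstrates the same round trip:

```python
# Database backup/restore in miniature, using the stdlib sqlite3 backup API
# as a stand-in for Azure SQL's export/restore workflow. The table and data
# are invented for the example.

import sqlite3

src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL)")
src.execute("INSERT INTO accounts VALUES (1, 500.0), (2, 125.5)")
src.commit()

# Full backup into a second (here also in-memory) database.
dst = sqlite3.connect(":memory:")
src.backup(dst)

# Verify the restore target contains the same application data.
restored = dst.execute("SELECT id, balance FROM accounts ORDER BY id").fetchall()
```

The verification query at the end is the important habit: a backup is only as good as a tested restore.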
True to its name, CashFlo is on a mission to unlock $100+ billion of trapped working capital in the economy by creating India’s largest marketplace for invoice discounting to solve the day-to-day problems faced by businesses. Founded by ex-BCG and ISB / IIM alumni, and backed by SAIF Partners, CashFlo helps democratize access to credit in a fair and transparent manner. Awarded Supply Chain Finance solution of the year in 2019, CashFlo creates a win-win ecosystem for buyers, suppliers
and financiers through its unique platform model. CashFlo shares its parentage with HCS Ltd., a 25-year-old, highly reputed financial services company that has raised over Rs. 15,000 Crores in the market till date
for over 200 corporate clients.
Our leadership team consists of ex-BCG, ISB / IIM alumni with a team of industry veterans from financial services serving as the advisory board. We bring to the table deep insights in the SME lending
space, based on 100+ years of combined experience in Financial Services. We are a team of passionate problem solvers, and are looking for like-minded people to join our team.
The challenge
Solve a complex $300+ billion problem at the cutting edge of Fintech innovation, and make a tangible difference to the small business landscape in India. Find innovative solutions for problems in a yet-to-be-discovered market.
Key Responsibilities
As an early team member, you will get a chance to set the foundations of our engineering culture. You will help articulate our engineering
principles and help set the long-term roadmap, making decisions on the evolution of CashFlo's technical architecture and building new features end to end, from talking to customers to writing code.
Our Ideal Candidate Will Have
3+ years of full-time DevOps engineering experience
Hands-on experience working with AWS services
Deep understanding of virtualization and orchestration tools like Docker, ECS
Experience in writing Infrastructure as Code using tools like CDK, CloudFormation, Terragrunt or Terraform
Experience using centralized logging & monitoring tools such as ELK, CloudWatch, DataDog
Built monitoring dashboards using Prometheus, Grafana
Built and maintained code pipelines and CI/CD
Thorough knowledge of SDLC
Been part of teams that have maintained large deployments
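The fail-fast contract behind the code pipelines mentioned above - run stages in order and stop at the first failure so a broken build never reaches deploy - can be modeled in a few lines (a toy sketch with invented stage names, not any specific tool's API):

```python
# Toy model of a CI/CD pipeline: run stages in order and stop at the first
# failure, which is the contract behind tools like Jenkins or CodePipeline.
from typing import Callable

def run_pipeline(stages: list) -> list:
    """Run (name, step) pairs in order; record outcomes, stop on failure."""
    results = []
    for name, step in stages:
        try:
            step()
            results.append((name, "passed"))
        except Exception:
            results.append((name, "failed"))
            break  # later stages (e.g. deploy) never run after a failure
    return results

def build(): pass          # stand-in for compile/package
def unit_tests():          # stand-in for the test stage; fails here
    raise RuntimeError("unit tests failed")
def deploy(): pass         # never reached in this run

results = run_pipeline([("build", build), ("test", unit_tests), ("deploy", deploy)])
```

Real pipelines add parallel stages, artifacts, and approvals, but the ordered fail-fast loop is the core.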
About You
Product-minded. You have a sense for great user experience and feel for when something is off. You love understanding customer pain points
and solving for them.
Get a lot done. You enjoy all aspects of building a product and are comfortable moving across the stack when necessary. You problem solve
independently and enjoy figuring stuff out.
High conviction. When you commit to something, you're in all the way. You're opinionated, but you know when to disagree and commit.
Mediocrity is the worst of all possible outcomes.
What's in it for me
Gain exposure to the Fintech space - one of the largest and fastest growing markets in India and globally
Shape India’s B2B Payments landscape through cutting edge technology innovation
Be directly responsible for driving the company's success
Join a high performance, dynamic and collaborative work environment that throws new challenges on a daily basis
Fast track your career with trainings, mentoring, growth opportunities on both IC and management track
Work-life balance and fun team events
We are a boutique IT services & solutions firm headquartered in the Bay Area with offices in India. Our offering includes custom-configured hybrid cloud solutions backed by our managed services. We combine best-in-class DevOps and IT infrastructure management practices to manage our clients' hybrid cloud environments.
In addition, we build and deploy our private cloud solutions using OpenStack to provide our clients with a secure, cost-effective and scalable hybrid cloud solution. We work with start-ups as well as enterprise clients.
This is an exciting opportunity for an experienced Cloud Engineer to work on exciting projects and have an opportunity to expand their knowledge working on adjacent technologies as well.
Must have skills
• Provisioning skills on IaaS cloud computing for platforms such as AWS, Azure, GCP.
• Strong working experience in the AWS space with various AWS services and implementations (e.g. VPC, SES, EC2, S3, Route 53, CloudFront, etc.)
• Ability to design solutions based on client requirements.
• Some experience with various network LAN/WAN appliances (Cisco routers and ASA systems, Barracuda, Meraki, SilverPeak, Palo Alto, Fortinet, etc.)
• Understanding of networked storage (NFS / SMB / iSCSI / Storage GW / Windows Offline)
• Linux / Windows server installation, maintenance, monitoring, data backup and recovery, security, and administration.
• Good knowledge of TCP/IP protocol & internet technologies.
• Passion for innovation and problem solving, in a start-up environment.
• Good communication skills.
Good to have
• Remote Monitoring & Management.
• Familiarity with Kubernetes and Containers.
• Exposure to DevOps automation scripts & experience with tools like Git, bash scripting, PowerShell, AWS Cloud Formation, Ansible, Chef or Puppet will be a plus.
• Architect / Practitioner certification from OEM with hands-on capabilities.
What you will be working on
• Troubleshoot and handle L2/L3 tickets.
• Design and architect Enterprise Cloud systems and services.
• Design, Build and Maintain environments primarily in AWS using EC2, S3/Storage, CloudFront, VPC, ELB, Auto Scaling, Direct Connect, Route53, Firewall, etc.
• Build and deploy in GCP/ Azure as needed.
• Architect cloud solutions keeping performance, cost and BCP considerations in mind.
• Plan cloud migration projects as needed.
• Collaborate & work as part of a cohesive team.
• Help build our private cloud offering on OpenStack.
As a MLOps Engineer in QuantumBlack you will:
Develop and deploy technology that enables data scientists and data engineers to build, productionize and deploy machine learning models following best practices. Work to set the standards for SWE and
DevOps practices within multi-disciplinary delivery teams
Choose and use the right cloud services, DevOps tooling and ML tooling for the team to be able to produce high-quality code that allows your team to release to production.
Build modern, scalable, and secure CI/CD pipelines to automate development and deployment
workflows used by data scientists (ML pipelines) and data engineers (Data pipelines)
Shape and support next generation technology that enables scaling ML products and platforms. Bring
expertise in cloud to enable ML use case development, including MLOps
Our Tech Stack:
We leverage AWS, Google Cloud, Azure, Databricks, Docker, Kubernetes, Argo, Airflow, Kedro, Python,
Terraform, GitHub Actions, MLflow, Node.js, React, TypeScript amongst others in our projects.
Key Skills:
• Excellent hands-on expert knowledge of cloud platform infrastructure and administration
(Azure/AWS/GCP) with strong knowledge of cloud services integration, and cloud security
• Expertise setting up CI/CD processes, building and maintaining secure DevOps pipelines with at
least 2 major DevOps stacks (e.g., Azure DevOps, Gitlab, Argo)
• Experience with modern development methods and tooling: Containers (e.g., docker) and
container orchestration (K8s), CI/CD tools (e.g., Circle CI, Jenkins, GitHub actions, Azure
DevOps), version control (Git, GitHub, GitLab), orchestration/DAGs tools (e.g., Argo, Airflow,
Kubeflow)
• Hands-on coding skills in Python 3 (e.g., building APIs, including automated testing frameworks and libraries
such as pytest), Infrastructure as Code (e.g., Terraform), and Kubernetes artifacts (e.g.,
deployments, operators, Helm charts)
• Experience setting up at least one contemporary MLOps tool (e.g., for experiment tracking,
model governance, packaging, deployment, or a feature store)
• Practical knowledge delivering and maintaining production software such as APIs and cloud
infrastructure
• Knowledge of SQL (intermediate level or more preferred) and familiarity working with at least
one common RDBMS (MySQL, Postgres, SQL Server, Oracle)
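What experiment-tracking tools such as MLflow record - parameters, metrics, and run ids - can be sketched with a small in-memory store (illustrative only; MLflow's actual API uses `mlflow.start_run`, `log_param`, and `log_metric` against a tracking server):

```python
# Minimal sketch of the data model behind experiment-tracking tools like
# MLflow: each run gets an id, its parameters, and its logged metrics.

import uuid

class ExperimentTracker:
    def __init__(self):
        self.runs = {}  # run_id -> {"params": ..., "metrics": ...}

    def start_run(self, params: dict) -> str:
        run_id = uuid.uuid4().hex
        self.runs[run_id] = {"params": params, "metrics": {}}
        return run_id

    def log_metric(self, run_id: str, name: str, value: float) -> None:
        self.runs[run_id]["metrics"][name] = value

    def best_run(self, metric: str) -> str:
        """Run id with the highest value of the given metric."""
        return max(self.runs, key=lambda r: self.runs[r]["metrics"][metric])

tracker = ExperimentTracker()
a = tracker.start_run({"lr": 0.01})
tracker.log_metric(a, "accuracy", 0.91)
b = tracker.start_run({"lr": 0.001})
tracker.log_metric(b, "accuracy", 0.94)
```

The value of a real tracker over this sketch is durability and comparison UI, but model governance and promotion decisions ultimately query exactly this params-plus-metrics record.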
Designation - DevOps Engineer
Urgently required (notice period of maximum 15 days).
Location:- Mumbai
Experience:- 3-5 years.
Package Offered:- Rs.4,00,000/- to Rs.7,00,000/- pa.
DevOps Engineer Job Description:-
Responsibilities
- Deploy updates and fixes
- Provide Level 2 technical support
- Build tools to reduce occurrences of errors and improve customer & client experience
- Develop software to integrate with internal back-end systems
- Perform root cause analysis for production errors
- Investigate and resolve technical issues
- Develop scripts to automate visualization
- Design procedures for system troubleshooting and maintenance
Requirements
- Work experience as a DevOps Engineer or similar software engineering role
- Work experience on AWS
- Good knowledge of Ruby or Python
- Working knowledge of databases and SQL
- Problem-solving attitude
- Team spirit
- BSc in Computer Science, Engineering or relevant field
You will work on:
You will be working on some of our clients' massive-scale infrastructure and DevOps requirements - designing for microservices and large-scale data analytics. You will be working on enterprise-scale problems - but will be part of our agile team that delivers like a startup. You will have the opportunity to be part of a team that's building and managing a large private cloud.
What you will do (Responsibilities):
- Work on cloud marketplace enablements for some of our clients products
- Write Kubernetes Operators to automate custom PaaS solutions
- Participate in cloud projects to implement new technology solutions, Proof of concepts to improve cloud technology offerings.
- Work with developers to deploy to private or public cloud/on-premise services, debug and resolve issues.
- On-call responsibilities to respond to emergency situations and scheduled maintenance.
- Contribute to and maintain documentation for systems, processes, procedures and infrastructure configuration
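The control-loop idea behind the Kubernetes operators mentioned above - repeatedly compare the desired spec with observed status and emit the actions needed to converge - looks roughly like this (a toy sketch; real operators watch resources through the Kubernetes API and client libraries such as kopf or controller-runtime):

```python
# The reconcile loop at the heart of a Kubernetes operator: diff desired
# spec against observed status and return the actions needed to converge.
# The action names and resource shape below are illustrative only.

def reconcile(spec: dict, status: dict) -> list:
    """Return the actions needed to move `status` toward `spec`."""
    actions = []
    diff = spec["replicas"] - status.get("replicas", 0)
    if diff > 0:
        actions.extend(["create-pod"] * diff)
    elif diff < 0:
        actions.extend(["delete-pod"] * (-diff))
    if status.get("version") != spec["version"]:
        actions.append(f"rollout-version-{spec['version']}")
    return actions

spec = {"replicas": 3, "version": "1.2.0"}
status = {"replicas": 1, "version": "1.1.0"}
actions = reconcile(spec, status)
# Once converged, reconcile returns no actions: that is the steady state
# the operator keeps re-checking on every watch event.
```

The key property is that reconcile is level-based, not edge-based: it only looks at current state, so missed events or restarts are harmless.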
What you bring (Skills):
- Experience administering and debugging Linux-based systems, with programming skills in scripting, Golang, Python among others
- Expertise in Git repositories specifically on GitHub, Gitlab, Bitbucket, Gerrit
- Comfortable with DevOps for Big Data databases like Teradata, Netezza, Hadoop-based ecosystems, BigQuery, RedShift among others
- Comfortable interfacing with SQL and NoSQL databases like MySQL, Postgres, MongoDB, ElasticSearch, Redis
Great if you know (Skills):
- Understanding various build and CI/CD systems – Maven, Gradle, Jenkins, Gitlab CI, Spinnaker or Cloud based build systems
- Exposure to deploying and automating on any public cloud – GCP, Azure or AWS
- Private cloud experience – VMWare or OpenStack
- Big DataOps experience – managing infrastructure and processes for Apache Airflow, Beam, Hadoop clusters
- Containerized applications – Docker-based image builds and maintenance.
- Kubernetes applications – deploy and develop operators, helm charts, manifests among other artifacts.
Advantage Cognologix:
- Higher degree of autonomy, startup culture & small teams
- Opportunities to become expert in emerging technologies
- Remote working options for the right maturity level
- Competitive salary & family benefits
- Performance based career advancement
About Cognologix:
Cognologix helps companies disrupt by reimagining their business models and innovating like a startup. We are at the forefront of digital disruption and take a business-first approach to help meet our clients’ strategic goals.
We are a DevOps-focused organization helping our clients focus on their core product activities by handling all aspects of their infrastructure, integration and delivery.
Benefits Working With Us:
- Health & Wellbeing
- Learn & Grow
- Evangelize
- Celebrate Achievements
- Financial Wellbeing
- Medical and Accidental cover.
- Flexible Working Hours.
- Sports Club & much more.
We are seeking a passionate DevOps Engineer to help create the next big thing in data analysis and search solutions.
You will join our Cloud infrastructure team supporting our developers. As a DevOps Engineer, you’ll be automating our environment setup and developing infrastructure as code to create a scalable, observable, fault-tolerant and secure environment. You’ll incorporate open source tools, automation, and Cloud Native solutions, and will empower our developers with this knowledge.
We will pair you up with world-class talent in cloud and software engineering and provide a position and environment for continuous learning.
You will work with the Engineering group to plan ongoing feature development and product maintenance.
• Familiar with Virtualization, Containers - Kubernetes, Core Networking, Cloud Native
Development, Platform as a Service – Cloud Foundry, Infrastructure as a Service, Distributed
Systems etc
• Implementing tools and processes for deployment, monitoring, alerting, automation, scalability,
and ensuring maximum availability of server infrastructure
• Should be able to manage distributed big data systems such as Hadoop, Storm, MongoDB,
Elasticsearch and Cassandra, etc.
• Troubleshooting multiple deployment servers, software installation, managing licensing, etc.
• Plan, coordinate, and implement network security measures in order to protect data, software, and
hardware.
• Monitor the performance of computer systems and networks, and coordinate computer network
access and use.
• Design, configure and test computer hardware, networking software, and operating system
software.
• Recommend changes to improve systems and network configurations, and determine hardware or
software requirements related to such changes.