
- Minimum 5-10 years of experience in database development or a related field, with proven experience in database design, development, and management, including large-scale databases and complex data environments.
- Experience with data modelling and database design; knowledge of database performance tuning and optimization.
- Architect, develop, and maintain tables, views, procedures, functions, and packages in the database.
- Perform complex relational database queries using SQL (AWS RDS for PostgreSQL) and Oracle PL/SQL.
- Familiarity with ETL processes and tools (AWS Batch, AWS Glue, etc.).
- Familiarity with CI/CD pipelines, Jenkins deployment, and Git repositories.
- Perform performance tuning; proactively monitor database systems to ensure secure services with minimum downtime, and improve database maintenance, including rollouts, patching, and upgrades.
- Experience with Aurora's scaling and replication capabilities; proficiency with AWS CloudWatch for monitoring database performance and setting up alerts; experience with performance tuning and optimization in AWS environments.
- Experience using Confluence for documentation and collaboration, and proficiency in SmartDraw for creating database diagrams, flowcharts, and other visual representations of data models and processes.
- Proficiency with libraries such as Pandas and NumPy for data manipulation, analysis, and transformation; experience with libraries like SQLAlchemy and PyODBC for connecting to and interacting with various databases.
- Python programming language.
- Agile/Scrum; communication (spoken English, clarity of thought).
- Big Data, data mining, machine learning, and natural language processing.
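As a rough illustration of the "complex relational queries" the requirements above describe, here is a minimal sketch of a windowed aggregation in SQL. The table and column names are hypothetical, and the stdlib `sqlite3` module stands in for the AWS RDS PostgreSQL instance named above; the SQL itself is portable.

```python
import sqlite3

# Hypothetical 'orders' table; sqlite3 stands in for PostgreSQL here so the
# snippet is self-contained.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (id INTEGER, customer TEXT, amount REAL);
INSERT INTO orders VALUES
  (1, 'acme', 100.0), (2, 'acme', 250.0), (3, 'globex', 75.0);
""")

# Window function: a per-customer running total alongside each order.
rows = conn.execute("""
SELECT customer,
       amount,
       SUM(amount) OVER (PARTITION BY customer ORDER BY id) AS running_total
FROM orders
ORDER BY customer, id
""").fetchall()

for customer, amount, running in rows:
    print(customer, amount, running)
```

The same `PARTITION BY ... ORDER BY` pattern works unchanged on PostgreSQL and, with minor syntax differences, in Oracle PL/SQL.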

Job Title : Senior DevOps Engineer
Location : Remote
Experience Level : 5+ Years
Role Overview :
We are a funded AI startup seeking a Senior DevOps Engineer to design, implement, and maintain a secure, scalable, and efficient infrastructure. In this role, you will focus on automating operations, optimizing deployment processes, and enabling engineering teams to deliver high-quality products seamlessly.
Key Responsibilities:
Infrastructure Scalability & Reliability :
- Architect and manage cloud infrastructure on AWS, GCP, or Azure for high availability, reliability, and cost-efficiency.
- Implement container orchestration using Kubernetes or Docker Compose.
- Utilize Infrastructure as Code (IaC) tools like Pulumi or Terraform to manage and configure infrastructure.
Deployment Automation :
- Design and maintain CI/CD pipelines using GitHub Actions, Jenkins, or similar tools.
- Implement deployment strategies such as canary or blue-green deployments, and create rollback mechanisms to ensure seamless updates.
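The canary strategy mentioned above hinges on an automated promote-or-rollback decision. The sketch below is illustrative only: the metric (error rate) and the threshold are assumptions, not taken from any particular tool.

```python
# Sketch of the promotion gate in a canary rollout: promote the canary only
# if its error rate stays close to the stable baseline's. The 1.5x tolerance
# is a hypothetical threshold chosen for illustration.

def should_promote(baseline_errors, baseline_total,
                   canary_errors, canary_total,
                   max_ratio=1.5):
    """Return True when the canary's error rate is within max_ratio
    of the baseline's error rate."""
    if canary_total == 0:
        return False  # no canary traffic yet -- keep waiting
    baseline_rate = baseline_errors / max(baseline_total, 1)
    canary_rate = canary_errors / canary_total
    if baseline_rate == 0:
        # Error-free baseline: only an error-free canary may promote.
        return canary_rate == 0
    return canary_rate <= baseline_rate * max_ratio

print(should_promote(10, 10_000, 1, 1_000))  # 0.1% canary vs 0.1% baseline -> True
print(should_promote(10, 10_000, 5, 1_000))  # 0.5% canary vs 0.1% baseline -> False
```

A real pipeline would pull these counts from the monitoring stack and trigger the rollback mechanism when the gate fails.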
Monitoring & Observability :
- Leverage tools like OpenTelemetry, Grafana, and Datadog to monitor system health and performance.
- Establish centralized logging systems and create real-time dashboards for actionable insights.
Security & Compliance :
- Securely manage secrets using tools like HashiCorp Vault or Doppler.
- Conduct static code analysis with tools such as SonarQube or Snyk to ensure compliance with security standards.
Collaboration & Team Enablement :
- Mentor and guide team members on DevOps best practices and workflows.
- Document infrastructure setups, incident runbooks, and troubleshooting workflows to enhance team efficiency.
Required Skills :
- Expertise in managing cloud platforms like AWS, GCP, or Azure.
- In-depth knowledge of Kubernetes, Docker, and IaC tools like Terraform or Pulumi.
- Advanced scripting capabilities in Python or Bash.
- Proficiency in CI/CD tools such as GitHub Actions, Jenkins, or similar.
- Experience with observability tools like Grafana, OpenTelemetry, and Datadog.
- Strong troubleshooting skills for debugging production systems and optimizing performance.
Preferred Qualifications :
- Experience in scaling AI or ML-based applications.
- Familiarity with distributed systems and microservices architecture.
- Understanding of agile methodologies and DevSecOps practices.
- Certifications in AWS, Azure, or Kubernetes.
What We Offer :
- Opportunity to work in a fast-paced AI startup environment.
- Flexible remote work culture.
- Competitive salary and equity options.
- Professional growth through challenging projects and learning opportunities.
We are looking to fill the role of AWS DevOps Engineer. To join our growing team, please review the list of responsibilities and qualifications.
Responsibilities:
- Engineer solutions using AWS services (CloudFormation, EC2, Lambda, Route 53, ECS, EFS)
- Balance hardware, network, and software layers to arrive at a scalable and maintainable solution that meets requirements for uptime, performance, and functionality
- Monitor server applications and use tools and log files to troubleshoot and resolve problems
- Maintain 99.99% availability of the web and integration services
- Anticipate, identify, mitigate, and resolve issues relating to client facing infrastructure
- Monitor, analyse, and predict trends for system performance, capacity, efficiency, and reliability and recommend enhancements in order to better meet client SLAs and standards
- Research and recommend innovative and automated approaches for system administration and DevOps tasks
- Deploy and decommission client environments for multi- and single-tenant hosted applications, following established processes and procedures and updating them as needed
- Follow and develop CPA change control processes for modifications to systems and associated components
- Practice configuration management, including maintenance of component inventory and related documentation, per company policies and procedures
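The 99.99% availability target above translates into a concrete downtime budget, which is worth knowing when planning maintenance windows. A quick calculation:

```python
# A 99.99% availability SLA leaves a small, concrete downtime budget.
def downtime_budget_minutes(availability, days=365):
    """Minutes of allowed downtime over the given number of days."""
    total_minutes = days * 24 * 60
    return total_minutes * (1 - availability)

per_year = downtime_budget_minutes(0.9999)       # ~52.6 minutes per year
per_month = downtime_budget_minutes(0.9999, 30)  # ~4.3 minutes per 30 days
print(round(per_year, 1), round(per_month, 2))
```

In other words, at "four nines" a single botched rollout can consume a month's entire error budget, which is why the automated deployment and rollback practices listed here matter.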
Qualifications :
- Git/GitHub version control tools
- Linux and/or Windows virtualisation (VMware, Xen, KVM, VirtualBox)
- Cloud computing (AWS, Google App Engine, Rackspace Cloud)
- Application Servers, servlet containers and web servers (WebSphere, Tomcat)
- Bachelor's/Master's degree; 2+ years of experience in software development
- Must have experience with AWS VPC networking and security
Job Title: DevOps SDE III
Job Summary
Porter seeks an experienced cloud and DevOps engineer to join our infrastructure platform team. This team is responsible for the organization's cloud platform, CI/CD, and observability infrastructure. As part of this team, you will be responsible for providing a scalable, developer-friendly cloud environment by participating in the design, creation, and implementation of automated processes and architectures to achieve our vision of an ideal cloud platform.
Responsibilities and Duties
In this role, you will
- Own and operate our application stack and AWS infrastructure to orchestrate and manage our applications.
- Support our application teams using AWS by provisioning new infrastructure and contributing to the maintenance and enhancement of existing infrastructure.
- Build out and improve our observability infrastructure.
- Set up automated auditing processes and improve our applications' security posture.
- Participate in troubleshooting infrastructure issues and preparing root cause analysis reports.
- Develop and maintain our internal tooling and automation to manage the lifecycle of our applications, from provisioning to deployment, zero-downtime and canary updates, service discovery, container orchestration, and general operational health.
- Continuously improve our build pipelines, automated deployments, and automated testing.
- Propose, participate in, and document proof of concept projects to improve our infrastructure, security, and observability.
Qualifications and Skills
Hard requirements for this role:
- 5+ years of experience as a DevOps / Infrastructure engineer on AWS.
- Experience with Git, CI/CD, and Docker. (We use GitHub, GitHub Actions, Jenkins, ECS, and Kubernetes.)
- Experience in working with infrastructure as code (Terraform/CloudFormation).
- Linux and networking administration experience.
- Strong Linux Shell scripting experience.
- Experience with one programming language and cloud provider SDKs. (Python + boto3 is preferred)
- Experience with configuration management tools like Ansible and Packer.
- Experience with container orchestration tools. (Kubernetes/ECS).
- Database administration experience and the ability to write intermediate-level SQL queries. (We use Postgres)
- AWS SysOps administrator + Developer certification or equivalent knowledge
Good to have:
- Experience working with ELK stack.
- Experience supporting JVM applications.
- Experience working with APM tools. (We use Datadog.)
- Experience working in an XaaC (everything-as-code) environment. (Packer, Ansible/Chef, Terraform/CloudFormation, Helm/Kustomize, Open Policy Agent/Sentinel)
- Experience working with security tools. (AWS Security Hub/Inspector/GuardDuty)
- Experience with JIRA/Jira help desk.
As an MLOps Engineer in QuantumBlack you will:
- Develop and deploy technology that enables data scientists and data engineers to build, productionize, and deploy machine learning models following best practices, and work to set the standards for SWE and DevOps practices within multi-disciplinary delivery teams.
- Choose and use the right cloud services, DevOps tooling, and ML tooling for the team to be able to produce high-quality code that allows your team to release to production.
- Build modern, scalable, and secure CI/CD pipelines to automate the development and deployment workflows used by data scientists (ML pipelines) and data engineers (data pipelines).
- Shape and support next-generation technology that enables scaling ML products and platforms, bringing expertise in cloud to enable ML use case development, including MLOps.
Our Tech Stack:
We leverage AWS, Google Cloud, Azure, Databricks, Docker, Kubernetes, Argo, Airflow, Kedro, Python, Terraform, GitHub Actions, MLflow, Node.js, React, and TypeScript, amongst others, in our projects.
Key Skills:
- Excellent hands-on expert knowledge of cloud platform infrastructure and administration (Azure/AWS/GCP), with strong knowledge of cloud services integration and cloud security.
- Expertise setting up CI/CD processes and building and maintaining secure DevOps pipelines with at least 2 major DevOps stacks (e.g., Azure DevOps, GitLab, Argo).
- Experience with modern development methods and tooling: containers (e.g., Docker) and container orchestration (K8s), CI/CD tools (e.g., CircleCI, Jenkins, GitHub Actions, Azure DevOps), version control (Git, GitHub, GitLab), and orchestration/DAG tools (e.g., Argo, Airflow, Kubeflow).
- Hands-on coding skills in Python 3 (e.g., APIs, including automated testing frameworks and libraries such as pytest), Infrastructure as Code (e.g., Terraform), and Kubernetes artifacts (e.g., deployments, operators, Helm charts).
- Experience setting up at least one contemporary MLOps tool (e.g., experiment tracking, model governance, packaging, deployment, feature store).
- Practical knowledge delivering and maintaining production software such as APIs and cloud infrastructure.
- Knowledge of SQL (intermediate level or better preferred) and familiarity working with at least one common RDBMS (MySQL, Postgres, SQL Server, Oracle).
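The skills above call out automated testing with pytest. A minimal sketch of what that looks like in practice, using a hypothetical ML preprocessing helper as the function under test:

```python
# Minimal pytest-style unit test. pytest collects any function named test_*;
# plain asserts suffice because pytest rewrites them to report expected vs.
# actual values on failure. The helper below is hypothetical.

def normalize_features(values):
    """Scale a list of numbers into [0, 1] (illustrative preprocessing step)."""
    lo, hi = min(values), max(values)
    if lo == hi:
        return [0.0] * len(values)
    return [(v - lo) / (hi - lo) for v in values]

def test_normalize_features():
    assert normalize_features([2, 4, 6]) == [0.0, 0.5, 1.0]
    assert normalize_features([5, 5]) == [0.0, 0.0]  # degenerate case

test_normalize_features()  # also runnable directly, outside pytest
```

In an ML pipeline, tests like this typically run in the CI stage before any training or deployment job is triggered.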
We are looking for a DevOps Engineer to manage the interchange of data between the server and the users. Your primary responsibility will be the development of all server-side logic, the definition and maintenance of the central database, and ensuring high performance and responsiveness to requests from the frontend. You will also be responsible for integrating the front-end elements built by your co-workers into the application, so a basic understanding of frontend technologies is necessary as well.
What we are looking for
- Must have strong knowledge of Kubernetes and Helm 3.
- Should have previous experience Dockerizing applications.
- Should be able to automate manual tasks using Shell or Python.
- Should have good working knowledge of the AWS and GCP clouds.
- Should have previous experience working with Bitbucket, GitHub, or any other VCS.
- Must be able to write Jenkins pipelines and have working knowledge of GitOps and ArgoCD.
- Hands-on experience in proactive monitoring using tools like New Relic, Prometheus, Grafana, Fluent Bit, etc.
- Should have a good understanding of the ELK stack.
- Exposure to Jira, Confluence, and sprints.
What you will do:
- Mentor junior DevOps engineers and raise the team's bar.
- Act as primary owner of tech best practices, tech processes, DevOps initiatives, and timelines.
- Oversee all server environments, from dev through production.
- Take responsibility for automation and configuration management.
- Provide stable environments for quality delivery.
- Assist with day-to-day issue management.
- Take the lead in containerising microservices.
- Develop deployment strategies that allow DevOps engineers to successfully deploy code in any environment.
- Enable the automation of CI/CD.
- Implement dashboards to monitor various systems and applications.
What we expect:
- 1-3 years of experience in DevOps.
- Experience in setting up front-end best practices.
- Experience working in high-growth startups.
- Ownership and a proactive attitude.
- A mentorship and upskilling mindset.
What you'll get:
- Health benefits
- Innovation-driven culture
- Smart and fun team to work with
- Friends for life
Srijan Technologies is hiring for the DevOps Lead position- Cloud Team with a permanent WFH option.
Immediate Joiners or candidates with 30 days notice period are preferred.
Requirements:-
- Minimum 4-6 years of experience in DevOps/release engineering.
- Expert-level knowledge of Git.
- Must have a great command of Kubernetes.
- Certified Kubernetes Administrator.
- Expert-level knowledge of shell scripting and Jenkins, to maintain continuous integration/deployment infrastructure.
- Expert-level knowledge of Docker.
- Expert-level knowledge of configuration management and provisioning toolchains: at least one of Ansible, Chef, or Puppet.
- Basic level of web development experience and setup: Apache, Nginx, MySQL.
- Basic familiarity with the Agile/Scrum process and Jira.
- Expert-level knowledge of AWS cloud services.
Our client is a call management solutions company, which helps small to mid-sized businesses use its virtual call center to manage customer calls and queries. It is an AI and cloud-based call operating facility that is affordable as well as feature-optimized. The advanced features offered, like call recording, IVR, toll-free numbers, call tracking, etc., are based on automation and enhance the call-handling quality and process for each client, as per their requirements. They service over 6,000 business clients, including large accounts like Flipkart and Uber.
- Being involved in Configuration Management, Web Services Architectures, DevOps Implementation, Build & Release Management, Database Management, Backups, and Monitoring.
- Creating and managing CI/CD pipelines for microservice architectures.
- Creating and managing application configuration.
- Researching and planning architectures and tools for smooth deployments.
- Logging, metrics and alerting management.
What you need to have:
- Proficient in the Linux command line and troubleshooting.
- Proficient in designing CI/CD pipelines using Jenkins; experience in deployment using Ansible.
- Experience in microservices architecture deployment; hands-on experience with Docker, Kubernetes, and EKS.
- Knowledge of infrastructure management tools (Infrastructure as Code) such as Terraform, AWS CloudFormation, etc.
- Proficient in AWS services: deployment, monitoring, and troubleshooting of applications in AWS.
- Configuration management tools like Ansible/Chef/Puppet.
- Proficient in deploying applications behind load balancers and proxy servers such as Nginx and Apache.
- Proficient in Bash scripting; Python scripting is an advantage.
- Experience with logging, monitoring, and alerting tools like ELK (Elasticsearch, Logstash, Kibana) and Nagios; Graylog, Splunk, Prometheus, and Grafana are a plus.
- Proficient in configuration management.

ABOUT US: TURGAJO TECHNOLOGIES PVT LTD (www.turgajo.com)
We are a product-based company, on a mission to capitalize on the evolution of new technologies and the new opportunities they present. We develop cutting-edge software solutions for the service industry and are working on some exciting projects that we believe have the power to fundamentally change and enhance the systems used by businesses every day.
ROLE AND RESPONSIBILITIES
At least 3 years of experience as a system administrator.
- Knowledge of server technologies (cloud and on-prem).
- Work with both development and QA teams, with a focus on automating builds and deployments.
- Knowledge of a range of current information technologies and computing platforms, such as cloud computing, monitoring systems (Azure, AWS, VMware, Commvault), and automation tools (Azure/AWS Cloud, SCCM).
- Good knowledge of cloud-based software like Azure and AWS.
- Proficiency in managing the CI/CD process cycle, including complete knowledge of Jenkins/TFS and GitHub.
- Ensure team collaboration using Jira, Confluence, and other tools.
- Build environments for unit tests, integration tests, system tests, and acceptance tests.
- Experience with virtualization (Azure, AWS, VMware).
- Hands-on experience with the Azure and AWS platforms.
- Azure and AWS monitoring and log analysis.
- Azure and AWS server and network security.
- Cloud-native engineering.
- Automate infrastructure provisioning and configuration (PowerShell scripting, etc.).
- Troubleshooting, problem-solving, and resolution skills (root cause identification).
- VM deployment.
- Azure and AWS Site Recovery.
- Excellent communication skills are mandatory.
QUALIFICATIONS AND EDUCATION REQUIREMENTS
- Bachelor's degree in computer science or related field required
- 3 years of experience in hands-on DevOps engineering
- Working experience with C#
- Maintain and administer Jenkins systems
- Expert in using Azure and AWS toolkits
- Good understanding of cloud application design patterns and practices preferably AWS
- Ability to quickly learn new technologies
- Strong problem-solving skills
- Excellent oral and written English communication skills
- Experience implementing CI/CD
- Experience using a wide variety of open source technologies and cloud services
- Experience with infrastructure automation solutions (Ansible, Chef, Puppet, etc.)
- Experience with Azure and AWS
- Define and implement configuration for high throughput and scale, with capacity planning and load-balancing strategies
PREFERRED SKILLS
Experience with Jenkins or any other CI/CD tool
Strong experience in automating CI/CD & application deployment using various tools like GitLab, Jenkins, and AWS.
Please email your resume at hraturgajodotcom
Should be open to embracing new technologies, keeping up with emerging tech.
Strong troubleshooting and problem-solving skills.
Willing to be part of a high-performance team and build mature products.
Should be able to take ownership and work under minimal supervision.
Strong Linux system administration background (with a minimum of 2 years' experience); responsible for handling/defining the organization's infrastructure (hybrid).
Working knowledge of MySQL databases, Nginx, and the HAProxy load balancer.
Experience with CI/CD pipelines, configuration management (Ansible/SaltStack), and cloud technologies (AWS/Azure/GCP).
Hands-on experience with GitHub, Jenkins, Prometheus, Grafana, Nagios, and open-source tools.
Strong shell and Python scripting would be a plus.
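As an example of the small scripting tasks this role's shell/Python requirement points at, here is a sketch that summarizes HTTP status codes from access-log lines such as those produced by Nginx or HAProxy. The log format shown is simplified and hypothetical.

```python
# Summarize HTTP status codes from simplified access-log lines.
from collections import Counter

sample_lines = [
    '10.0.0.1 - - "GET /api/v1/users HTTP/1.1" 200 512',
    '10.0.0.2 - - "GET /api/v1/users HTTP/1.1" 500 87',
    '10.0.0.1 - - "POST /api/v1/login HTTP/1.1" 200 310',
]

def status_counts(lines):
    """Count responses per HTTP status; in this simplified format the
    status code is the second-to-last whitespace-separated field."""
    return Counter(line.split()[-2] for line in lines)

print(status_counts(sample_lines))
```

In practice the same counting logic would read from the real log file and feed an alert when, say, the share of 5xx responses crosses a threshold.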

