
Job Summary:
Seeking a seasoned SQL + ETL Developer with 4+ years of experience in managing large-scale datasets and cloud-based data pipelines. The ideal candidate is hands-on with MySQL, PySpark, AWS Glue, and ETL workflows, with proven expertise in AWS migration and performance optimization.
Key Responsibilities:
- Develop and optimize complex SQL queries and stored procedures to handle large datasets (100+ million records).
- Build and maintain scalable ETL pipelines using AWS Glue and PySpark.
- Work on data migration tasks in AWS environments.
- Monitor and improve database performance; automate key performance indicator (KPI) tracking and reporting.
- Collaborate with cross-functional teams to support data integration and delivery requirements.
- Write shell scripts for automation and manage ETL jobs efficiently.
Required Skills:
- Strong experience with MySQL, complex SQL queries, and stored procedures.
- Hands-on experience with AWS Glue, PySpark, and ETL processes.
- Good understanding of AWS ecosystem and migration strategies.
- Proficiency in shell scripting.
- Strong communication and collaboration skills.
Nice to Have:
- Working knowledge of Python.
- Experience with AWS RDS.
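The first responsibility above calls for SQL that stays efficient at 100+ million rows. One common pattern is keyset pagination, which avoids the quadratic cost of `OFFSET` scans on large tables. A minimal sketch follows; the `orders` table and `amount` column are illustrative, and sqlite3 stands in for MySQL so the example is self-contained.

```python
import sqlite3

def iter_batches(conn, batch_size=1000):
    """Yield rows in primary-key order using keyset pagination:
    each query seeks past the last id seen instead of using OFFSET."""
    last_id = 0
    while True:
        rows = conn.execute(
            "SELECT id, amount FROM orders WHERE id > ? ORDER BY id LIMIT ?",
            (last_id, batch_size),
        ).fetchall()
        if not rows:
            break
        yield rows
        last_id = rows[-1][0]  # resume after the last key of this batch

# Demo on an in-memory database standing in for MySQL
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")
conn.executemany("INSERT INTO orders (amount) VALUES (?)",
                 [(i * 1.5,) for i in range(2500)])

total = sum(len(batch) for batch in iter_batches(conn))
print(total)  # 2500
```

The same seek-based predicate (`WHERE id > ? ... LIMIT ?`) works unchanged in MySQL and keeps each batch query on the primary-key index.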

About Deqode
Role Overview:
As a DevOps Engineer (L2), you will play a key role in designing, implementing, and optimizing infrastructure. You will take ownership of automating processes, improving system reliability, and supporting the development lifecycle.
Key Responsibilities:
- Design and manage scalable, secure, and highly available cloud infrastructure.
- Lead efforts in implementing and optimizing CI/CD pipelines.
- Automate repetitive tasks and develop robust monitoring solutions.
- Ensure the security and compliance of systems, including IAM, VPCs, and network configurations.
- Troubleshoot complex issues across development, staging, and production environments.
- Mentor and guide L1 engineers on best practices.
- Stay updated on emerging DevOps tools and technologies.
- Manage cloud resources efficiently using Infrastructure as Code (IaC) tools like Terraform and AWS CloudFormation.
Qualifications:
- Bachelor’s degree in Computer Science, IT, or a related field.
- Proven experience with CI/CD pipelines and tools like Jenkins, GitLab, or Azure DevOps.
- Advanced knowledge of cloud platforms (AWS, Azure, or GCP) with hands-on experience in deployments, migrations, and optimizations.
- Strong expertise in containerization (Docker) and orchestration tools (Kubernetes).
- Proficiency in scripting languages like Python, Bash, or PowerShell.
- Deep understanding of system security, networking, and load balancing.
- Strong analytical skills and problem-solving mindset.
- Certifications (e.g., AWS Certified Solutions Architect, Kubernetes Administrator) are a plus.
What We Offer:
- Opportunity to work with a cutting-edge tech stack in a product-first company.
- Collaborative and growth-oriented environment.
- Competitive salary and benefits.
- Freedom to innovate and contribute to impactful projects.
We're Hiring: AI-Native DevSecOps Engineers at FlytBase
📍 Location: Pune (Onsite)
This will be one of the most challenging and rewarding roles of your career.
At FlytBase, prompting is thinking. We don’t care if you’ve memorized APIs or collected DevOps certifications.
We care how you think, how you debug under pressure, and how you solve infra problems no one else can.
We don’t hire engineers to maintain systems.
We hire infra architects to design infrastructure that runs—and evolves—on its own.
If you need step-by-step tickets or constant direction—don’t apply.
We’re looking for engineers who can design, secure, and scale real-world systems with speed, clarity, and zero excuses.
If that’s what you’ve been waiting for—read on.
The Mission
FlytBase is building the autonomous backbone for aerial intelligence—drone fleets that operate 24/7 at industrial scale.
Our platform flies missions, detects anomalies, and delivers insights—with no human in the loop.
The infra behind this? That’s what you build.
Your Loop
• Design AI-secured CI/CD pipelines (GitHub, Docker, Terraform, SAST/DAST)
• Architect AWS infra (EC2, S3, IAM, VPCs, EKS) with Infrastructure-as-Code
• Build intelligent observability systems (Grafana, Dynatrace, LLM-based detection)
• Define fallback, rollback & recovery loops that don’t need escalation
• Enforce compliance (SOC2, ISO27001, GDPR) without killing velocity
• Own SLAs from definition → delivery → defense
• Automate what used to need a team. Then automate again.
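The fallback/rollback item above describes a recovery loop that resolves failures without human escalation. A minimal, infrastructure-agnostic sketch is below; the function names (`deploy_with_rollback`, the injected callables) are hypothetical, not FlytBase's actual tooling.

```python
import time

def deploy_with_rollback(deploy, health_check, rollback, retries=3, delay=0.0):
    """Attempt a deploy, verify it with a health check, and roll back
    on failure. Callables are injected so the loop itself stays generic."""
    for attempt in range(1, retries + 1):
        deploy()
        if health_check():
            return f"healthy after attempt {attempt}"
        rollback()
        time.sleep(delay)  # back off before the next attempt
    return "rolled back; deploy abandoned"

# Demo: a deploy that only becomes healthy on its third attempt
state = {"attempts": 0}
result = deploy_with_rollback(
    deploy=lambda: state.update(attempts=state["attempts"] + 1),
    health_check=lambda: state["attempts"] >= 3,
    rollback=lambda: None,
)
print(result)  # healthy after attempt 3
```

In practice the injected callables would wrap real steps (a Terraform apply, a Grafana/Dynatrace probe, a versioned rollback), with escalation only after the loop exhausts its retries.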
Who We're Looking For
✅ You treat infrastructure like a product
✅ You already use AI Tools to move 5x faster
✅ You can go from “zero” to “live infra” in 48 hours
Bonus points if you've:
• Built custom DevOps bots or AI agents
• Earned an OSCP or hacked together your own SOC2 framework
• Shipped production infra solo
• Open-sourced your infra tools or scripts
Join Us
If you’re still reading—good. That’s a signal.
Apply here 👉 https://lnkd.in/gsqjaJSP
Senior DevOps Engineer
Experience: Minimum 5 years of relevant experience
Key Requirements:
• Hands-on experience with AWS tools, CI/CD pipelines, and Red Hat Linux
• Strong expertise in DevOps practices and principles
• Experience with infrastructure automation and configuration management
• Excellent problem-solving skills and attention to detail
Nice to Have:
• Red Hat certification
LogiNext is looking for a technically savvy and passionate Junior DevOps Engineer to support the development and operations efforts on our product. You will choose and deploy tools and technologies to build and support a robust and scalable infrastructure.
You bring knowledge of building secure, high-performing, and scalable infrastructure, experience automating and streamlining development operations and processes, and strong troubleshooting skills for resolving issues in both non-production and production environments.
Responsibilities:
- Scale and optimise a variety of SQL and NoSQL databases, web servers, application frameworks, caches, and distributed messaging systems
- Automate the deployment and configuration of the virtualized infrastructure and the entire software stack
- Support several Linux servers running our SaaS platform stack on AWS, Azure, and GCP
- Define and build processes to identify performance bottlenecks and scaling pitfalls
- Manage robust monitoring and alerting infrastructure
- Explore new tools to improve development operations
Requirements:
- Bachelor’s degree in Computer Science, Information Technology, or a related field
- 0 to 1 years of experience designing and maintaining high-volume, scalable microservices architecture on cloud infrastructure
- Knowledge of Linux/Unix administration and Python/shell scripting
- Experience working with cloud platforms like AWS (EC2, ELB, S3, Auto Scaling, VPC, Lambda), GCP, and Azure
- Knowledge of deployment automation, Continuous Integration and Continuous Deployment (Jenkins, Maven, Puppet, Chef, GitLab), and monitoring tools like Zabbix, CloudWatch, and Nagios
- Knowledge of Java Virtual Machines, Apache Tomcat, Nginx, Apache Kafka, microservices architecture, and caching mechanisms
- Experience in enterprise application development, maintenance, and operations
- Knowledge of best practices and IT operations for an always-up, always-available service
- Excellent written and oral communication skills, judgment, and decision-making skills
Key Skills Required for Lead DevOps Engineer
Containerization Technologies: Docker, Kubernetes, OpenShift
Cloud Technologies: AWS, Azure, GCP
CI/CD Pipeline Tools: Jenkins, Azure DevOps
Configuration Management Tools: Ansible, Chef
SCM Tools: Git, GitHub, Bitbucket
Monitoring Tools: New Relic, Nagios, Prometheus
Cloud Infra Automation: Terraform
Scripting Languages: Python, Shell, Groovy
· Ability to decide the project architecture and choose tools based on availability
· Sound knowledge of deployment strategies and the ability to define timelines
· Team handling skills are a must
· Debugging skills are an advantage
· Good to have knowledge of databases like MySQL and PostgreSQL
· Familiarity with Kafka and RabbitMQ is an advantage
· Good to have knowledge of web servers for deploying web applications
· Good to have knowledge of code quality checking tools like SonarQube and vulnerability scanning
· Experience in DevSecOps is an advantage
Note: Tools mentioned in bold are a must; the others are an added advantage.
- Strong communication skills (written and verbal)
- Responsive, reliable and results oriented with the ability to execute on aggressive plans
- A background in software development, with experience of working in an agile product software development environment
- An understanding of modern deployment tools (Git, Bitbucket, Jenkins, etc.), workflow tools (Jira, Confluence) and practices (Agile (SCRUM), DevOps, etc.)
- Expert-level experience with AWS tools, technologies, and associated APIs: IAM, CloudFormation, CloudWatch, AMIs, SNS, EC2, EBS, EFS, S3, RDS, VPC, ELB, Route 53, Security Groups, Lambda, etc.
- Hands on experience with Kubernetes (EKS preferred)
- Strong DevOps skills across CI/CD and configuration management using Jenkins, Ansible, Terraform, Docker.
- Experience provisioning and spinning up AWS Clusters using Terraform, Helm, Helm Charts
- Ability to work across multiple projects simultaneously
- Ability to manage and work with teams and customers across the globe
We are looking for a full-time remote DevOps Engineer who has worked with CI/CD automation, big data pipelines, and cloud infrastructure, to solve complex technical challenges at scale that will reshape the healthcare industry for generations. You will get the opportunity to be involved in the latest tech in big data engineering, novel machine learning pipelines, and highly scalable backend development. The successful candidate will work in a team of highly skilled and experienced developers, data scientists, and the CTO.
Job Requirements
- Experience deploying, automating, maintaining, and improving complex services and pipelines
- Strong understanding of DevOps tools, processes, and methodologies
- Experience with AWS Cloud Formation and AWS CLI is essential
- The ability to work to project deadlines efficiently and with minimum guidance
- A positive attitude and enjoys working within a global distributed team
Skills
- Highly proficient working with CI/CD and automating infrastructure provisioning
- Deep understanding of the AWS cloud platform and hands-on experience setting up and maintaining large-scale implementations
- Experience with JavaScript/TypeScript, Node, Python and Bash/Shell Scripting
- Hands on experience with Docker and container orchestration
- Experience setting up and maintaining big data pipelines, Serverless stacks and containers infrastructure
- An interest in healthcare and medical sectors
- Technical degree with 4+ years of infrastructure and automation experience

• Develop and maintain CI/CD tools to build and deploy scalable web and responsive applications in production environment
• Design and implement monitoring solutions that identify both system bottlenecks and production issues
• Design and implement workflows for continuous integration, including provisioning, deployment, testing, and version control of the software.
• Develop self-service solutions for the engineering team in order to deliver sites/software with great speed and quality
o Automating Infra creation
o Provide easy to use solutions to engineering team
• Research, test, and implement new metrics collection systems that can be reused and applied as engineering best practices
o Update our processes and design new processes as needed.
o Establish DevOps Engineer team best practices.
o Stay current with industry trends and source new ways for our business to improve.
• Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
• Manage timely resolution of all critical and/or complex problems
• Maintain, monitor, and establish best practices for containerized environments.
• Mentor new DevOps engineers
What you will bring
• The desire to work in a fast-paced environment.
• 5+ years’ experience building, maintaining, and deploying production infrastructures in AWS or other cloud providers
• Containerization experience with applications deployed on Docker and Kubernetes
• Understanding of NoSQL and relational databases with respect to deployment and horizontal scalability
• Demonstrated knowledge of distributed and scalable systems
• Experience maintaining and deploying critical infrastructure components through Infrastructure-as-Code and configuration management tooling across multiple environments (Ansible, Terraform, etc.)
• Strong knowledge of DevOps and CI/CD pipeline (GitHub, BitBucket, Artifactory etc)
• Strong understanding of cloud and infrastructure components (server, storage, network, data, and applications) to deliver end-to-end cloud Infrastructure architectures and designs and recommendations
o Use AWS services like S3, CloudFront, Kubernetes, RDS, and data warehouses to propose architectures and suggestions for new use cases.
• Test our system integrity, implemented designs, application developments, and other infrastructure-related processes, making improvements as needed
Good to have
• Experience with code quality tools, static or dynamic code analysis and compliance and undertaking and resolving issues identified from vulnerability and compliance scans of our infrastructure
• Good knowledge of REST/SOAP/JSON web service API implementation
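The bullet above asks for REST/JSON web service API implementation knowledge. A minimal sketch of a JSON-over-HTTP endpoint using only the Python standard library is shown below; the `/status` route and its payload are illustrative, not part of any specific product API.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class StatusHandler(BaseHTTPRequestHandler):
    """Respond to GET /status with a JSON body; 404 everything else."""
    def do_GET(self):
        if self.path == "/status":
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # silence per-request logging
        pass

# Bind to port 0 so the OS picks any free port, then serve in the background
server = HTTPServer(("127.0.0.1", 0), StatusHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

with urlopen(f"http://127.0.0.1:{server.server_port}/status") as resp:
    payload = json.loads(resp.read())
print(payload)  # {'status': 'ok'}
server.shutdown()
```

A production service would of course sit behind a real framework and a web server, but the moving parts (routing, status codes, Content-Type, JSON serialization) are the same.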
2. Extensive expertise in the following areas of AWS development.
3. Amazon DynamoDB, Amazon RDS, Amazon APIs, AWS Elastic Beanstalk, and AWS CloudFormation.
4. Lambda, Kinesis, CodeCommit, and CodePipeline.
5. Leveraging AWS SDKs to interact with AWS services from the application.
6. Writing code that optimizes the performance of AWS services used by the application.
7. Developing with RESTful API interfaces.
8. Code-level application security (IAM roles, credentials, encryption, etc.).
9. Programming in Python or .NET; programming with AWS APIs.
10. General troubleshooting and debugging.

