
Position Responsibilities:
- Primary focus on installation, testing, and configuration of Oracle, ASM, backup/recovery, security, monitoring, maintenance, logging, HA, and DR setup.
- Manage hundreds of Oracle databases, using innovative routines and tools to identify opportunities for better security and availability
- Set up and manage Oracle OEM
- Analyze, solve, and correct issues in real time
- Bring standards across environments and products by developing SOPs.
- Optimize day-to-day operations and bring efficiencies
- Plan and implement industry best practices for the best utilization of available resources
- Perform duties in compliance with Deltek, industry, and regulatory security and compliance standards as defined
- Work with cross-functional teams to assist in application and hosting optimization
- Respond effectively to high-priority issues and incidents when needed
- Provide technical expertise and training to internal and external teams for Oracle administration
- Work independently and with a global team on highly specialized projects
- Provide on-call support as required
Qualifications:
- 10+ years of Oracle DBA experience, at least 4 of which with 12c or newer with multitenancy in a virtualized environment; AWS is preferred
- In-depth knowledge and extensive hands-on experience with Oracle, Grid, ASM, OEM, Fleet Management, RMAN, Data Guard, TDE, CMU, Patching, and other critical components of Oracle
- In-depth knowledge of Pluggable Databases, virtualization platforms, Linux OS environments, and operating system internals
- Thorough understanding of configuration options, configurable components, concepts, and technologies
- Strong expertise and hands-on with installation, configuration, backup, and restoration of all data
- Extensive experience in troubleshooting and maintaining 100+ database instances using a variety of tools
- Experience with large-scale database management designs, best practices, and issues
- Good experience with Shell/Python scripting to manage effort-consuming database tasks in a more innovative way
- Excellent verbal and written communication skills
- Excellent time management and prioritization skills with an ability to escalate concerns or questions
- Solid, data-driven analytical skills; ability to create reports and present decision-supporting data/patterns
- Knowledge of and experience with OCI (Oracle Cloud Infrastructure), Snowflake, and Rubrik is a plus.
- Individuals will be working European hours to support Denmark.
- Improves and contributes towards the team's delivery process and raises change requests appropriately. Can estimate the effort required and ensure priority & urgency are understood. Ensures appropriate monitoring is in place.
- Displays good skills in executing and creating solutions for toil reduction and efficiency. Exercises abilities in troubleshooting, critical thinking, and problem-solving.
- Handles all incidents and leads incident response with appropriate communication; takes a systematic approach to problems and sees them through to conclusion.
- Deliver business value by improving functional/product knowledge by executing best practices and contributing ideas toward innovation.
- Ability to exercise analytical and technical thinking across multiple areas of responsibility.
- When delivering tasks, collaborating, and managing feedback, the sphere of influence extends to the supervisor, peers, clients, and other teams. Decisions affect a wide range of teams or areas of responsibility and facilitate the crafting and implementation of strategies.


At TechBiz Global, we provide recruitment services to our top clients from our portfolio. We are currently seeking four DevOps Support Engineers to join one of our clients' teams in India, starting by the 20th of July. If you're looking for an exciting opportunity to grow in an innovative environment, this could be the perfect fit for you.
Job requirements
Key Responsibilities:
- Monitor and troubleshoot AWS and/or Azure environments to ensure optimal performance and availability.
- Respond promptly to incidents and alerts, investigating and resolving issues efficiently.
- Perform basic scripting and automation tasks to streamline cloud operations (e.g., Bash, Python).
- Communicate clearly and fluently in English with customers and internal teams.
- Collaborate closely with the Team Lead, following Standard Operating Procedures (SOPs) and escalation workflows.
- Work in a rotating shift schedule, including weekends and nights, ensuring continuous support coverage.
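The "basic scripting and automation" bullet above can be sketched with a small, purely hypothetical alert-triage helper; the alert fields, severity scale, and escalation rules are illustrative assumptions, not part of any real SOP or monitoring API:

```python
# Hypothetical alert-triage helper: decides whether a monitoring alert
# can be auto-remediated, escalated, or ignored per a (made-up) SOP.

def triage(alert: dict) -> str:
    """Return 'escalate', 'ignore', or 'auto-remediate' for an alert.

    `alert` is assumed to carry 'severity' (1=critical .. 5=info) and
    'resource' keys, mirroring a simplified cloud monitoring payload.
    """
    severity = alert.get("severity", 5)
    if severity <= 2:                      # critical/high: page the on-call engineer
        return "escalate"
    if alert.get("resource", "").startswith("dev-"):
        return "ignore"                    # non-production noise
    return "auto-remediate"                # routine issue: run the SOP runbook

alerts = [
    {"severity": 1, "resource": "prod-api"},
    {"severity": 4, "resource": "dev-cache"},
    {"severity": 3, "resource": "prod-worker"},
]
decisions = [triage(a) for a in alerts]
print(decisions)  # ['escalate', 'ignore', 'auto-remediate']
```

In practice such a script would consume real alert payloads (e.g., from CloudWatch or Azure Monitor) rather than hard-coded dictionaries.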
Shift Details:
- Engineers rotate through morning, evening, and night shifts—including weekends—typically working 4–5 shifts per week to cover 24/7 support evenly among the team.
- Rotation ensures no single engineer is always working nights or weekends; the load is shared fairly among the team.
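The fairness property of the rotation described above can be sketched with a simple round-robin assignment; the engineer names, shift names, and one-week window are assumptions for illustration only:

```python
from itertools import cycle

# Sketch of a fair 24/7 rota: round-robin a 4-person team across
# 3 daily shifts so nobody is pinned to nights or weekends.

engineers = ["eng-a", "eng-b", "eng-c", "eng-d"]
shifts = ["morning", "evening", "night"]

def build_rota(days):
    """Assign one engineer per shift per day, rotating through the team."""
    pool = cycle(engineers)
    return [(day, shift, next(pool)) for day in range(days) for shift in shifts]

rota = build_rota(days=7)          # 7 days x 3 shifts = 21 assignments
night_counts = {e: sum(1 for _, s, who in rota if s == "night" and who == e)
                for e in engineers}
print(night_counts)                # night-shift counts differ by at most 1
```

A real scheduler would also honour time-off requests and local labour rules; the point here is only that round-robin spreads unpopular shifts evenly.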
Qualifications:
- 2–5 years of experience in DevOps or cloud support roles.
- Strong familiarity with AWS and/or Azure cloud environments.
- Experience with CI/CD tools such as GitHub Actions or Jenkins.
- Proficiency with monitoring tools like Datadog, CloudWatch, or similar.
- Basic scripting skills in Bash, Python, or comparable languages.
- Excellent communication skills in English.
- Comfortable and willing to work in a shift-based support role, including night and weekend shifts.
- Prior experience in a shift-based support environment is preferred.
What We Offer:
- Remote work opportunity — work from anywhere in India with a stable internet connection.
- Comprehensive training program including:
- Shadowing existing processes to gain hands-on experience.
- Learning internal tools, Standard Operating Procedures (SOPs), ticketing systems, and escalation paths to ensure smooth onboarding and ongoing success.

- Experience with Infrastructure-as-Code (IaC) tools like Terraform and CloudFormation.
- Proficiency in cloud-native technologies and architectures (Docker/Kubernetes) and CI/CD pipelines.
- Good experience in JavaScript.
- Expertise in Linux/Windows environments.
- Good experience in scripting languages like PowerShell/Bash/Python.
- Proficiency in revision control (e.g., Git) and DevOps best practices
Position: SDE-1 DevSecOps
Location: Pune, India
Experience Required: 0+ Years
We are looking for a DevSecOps engineer to contribute to product development, mentor team members, and devise creative solutions for customer needs. We value effective communication in person, in documentation, and in code. Ideal candidates thrive in small, collaborative teams, love making an impact, and take pride in their work with a product-focused, self-driven approach. If you're passionate about integrating security and deployment seamlessly into the development process, we want you on our team.
About FlytBase
FlytBase is a global leader in enterprise drone software automation. The FlytBase platform enables drone-in-a-box deployments across the globe and has the largest network of partners in 50+ countries.
The team comprises young engineers and designers from top-tier universities such as IIT-B, IIT-KGP, University of Maryland, Georgia Tech, COEP, SRM, and KIIT, with deep expertise in drone technology, computer science, electronics, aerospace, and robotics.
The company is headquartered in Silicon Valley, California, USA, and has R&D offices in Pune, India. Widely recognized as a pioneer in the commercial drone ecosystem, FlytBase continues to win awards globally: FlytBase was the Global Grand Champion at the ‘NTT Data Open Innovation Contest’ held in Tokyo, Japan, and was the recipient of the ‘TiE50 Award’ at TiE Silicon Valley.
Role and Responsibilities:
- Participate in the creation and maintenance of CI/CD solutions and pipelines.
- Leverage Linux and shell scripting for automating security and system updates, and design secure architectures using AWS services (VPC, EC2, S3, IAM, EKS/Kubernetes) to enhance application deployment and management.
- Build and maintain secure Docker containers, manage orchestration using Kubernetes, and automate configuration management with tools like Ansible and Chef, ensuring compliance with security standards.
- Implement and manage infrastructure using Terraform, aligning with security and compliance requirements, and set up Dynatrace for advanced monitoring, alerting, and visualization of security metrics. Develop Terraform scripts to automate and optimize infrastructure provisioning and management tasks.
- Utilize Git for secure source code management and integrate continuous security practices into CI/CD pipelines, applying vulnerability scanning and automated security testing tools.
- Contribute to security assessments, including vulnerability and penetration testing against frameworks such as NIST, CIS AWS Benchmarks, and NIS2.
- Implement and oversee compliance processes for SOC 2, ISO 27001, and GDPR.
- Stay updated on cybersecurity trends and best practices, including knowledge of SAST and DAST tools and the OWASP Top 10.
- Automate routine tasks and create tools to improve team efficiency and system robustness.
- Contribute to disaster recovery plans and ensure robust backup systems are in place.
- Develop and enforce security policies and respond effectively to security incidents.
- Manage incident response protocols, including on-call rotations and strategic planning.
- Conduct post-incident reviews to prevent recurrence and refine the system reliability framework.
- Implement Service Level Indicators (SLIs) and maintain Service Level Objectives (SLOs) and Service Level Agreements (SLAs) to ensure high standards of service delivery and reliability.
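The SLI/SLO arithmetic behind the last bullet can be sketched in a few lines; the request counts below are illustrative numbers, not real traffic data:

```python
# Sketch of SLI/SLO bookkeeping: an availability SLI from request counts,
# and the error budget implied by a 99.9% SLO over a 30-day window.

def availability_sli(good, total):
    """SLI = fraction of successful requests."""
    return good / total

def error_budget_minutes(slo, window_days=30):
    """Allowed downtime in minutes for the SLO over the window."""
    return (1.0 - slo) * window_days * 24 * 60

sli = availability_sli(good=999_650, total=1_000_000)   # 0.99965
budget = error_budget_minutes(slo=0.999)                # ~43.2 minutes/month
print(f"SLI={sli:.5f}, monthly error budget={budget:.1f} min")
```

So a 99.9% monthly SLO leaves roughly 43 minutes of downtime before the error budget is exhausted; an SLI of 0.99965 means the service is currently inside budget.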
Best suited for candidates who: (Skills/Experience)
- Up to 4 years of experience in a related field, with a strong emphasis on learning and execution.
- Background in IT or computer science.
- Familiarity with CI/CD tools, cloud platforms (AWS, Azure, or GCP), and programming languages like Python, JavaScript, or Ruby.
- Solid understanding of network layers and TCP/IP protocols.
- In-depth understanding of operating systems, networking, and cloud services.
- Strong problem-solving skills with a 'hacker' mindset.
- Knowledge of security principles, threat modeling, risk assessment, and vulnerability management is a plus.
- Relevant certifications (e.g., CISSP, GWAPT, OSCP) are a plus.
Compensation:
This role comes with an annual CTC that is market competitive and depends on the quality of your work experience, degree of professionalism, culture fit, and alignment with FlytBase’s long-term business strategy.
Perks:
- Fast-paced Startup culture
- Hacker mode environment
- Enthusiastic and approachable team
- Professional autonomy
- Company-wide sense of purpose
- Flexible work hours
- Informal dress code
Requirements
- 3+ years work experience writing clean production code
- Well versed with maintaining infrastructure as code (Terraform, CloudFormation, etc.); high proficiency with Terraform/Terragrunt is absolutely critical
- Experience setting up CI/CD pipelines from scratch
- Experience with AWS (EC2, ECS, RDS, ElastiCache, etc.), AWS Lambda, Kubernetes, Docker, and service meshes
- Experience with ETL pipelines and big data infrastructure
- Understanding of common security issues
Roles / Responsibilities:
- Write Terraform modules for deploying different components of infrastructure in AWS, such as Kubernetes, RDS, Prometheus, Grafana, and static websites
- Configure networking, autoscaling, continuous deployment, security, and multiple environments
- Make sure the infrastructure is SOC 2, ISO 27001, and HIPAA compliant
- Automate all the steps to provide a seamless experience to developers.
Company Name: Petpooja!
Location: Ahmedabad
Designation: DevOps Engineer
Experience: Between 2 to 7 Years
Candidates from Ahmedabad will be preferred
Job Location: Ahmedabad
Job Responsibilities:
- Plan, implement, and maintain the software development infrastructure.
- Introduce and oversee software development automation across cloud providers like AWS and Azure
- Help develop, manage, and monitor continuous integration and delivery systems
- Collaborate with software developers, QA specialists, and other team members to ensure the timely and successful delivery of new software releases
- Contribute to software design and development, including code review and feedback
- Assist with troubleshooting and problem-solving when issues arise
- Keep up with the latest industry trends and best practices while ensuring the company meets configuration requirements
- Participate in team improvement initiatives
- Help create and maintain internal documentation using Git or other similar applications
- Provide on-call support as needed
Qualification Required:
1. Experience handling various services on the AWS cloud.
2. Previous experience as a Site Reliability Engineer would be an advantage.
3. Well versed with various commands and hands-on with Linux and Ubuntu administration, and other aspects of supporting a software development team.
4. At least 2 to 7 years of experience managing AWS services such as Auto Scaling, Route 53, and various other internal networks.
5. An AWS certification is recommended.
Job Description
Intuitive is the fastest growing top-tier Cloud Solutions and Services company, supporting Global Enterprise Customers across the Americas, Europe, and the Middle East.
Intuitive is looking for highly talented, hands-on Cloud Infrastructure Architects to help accelerate our growing Professional Services consulting Cloud & DevOps practice. This is an excellent opportunity to join Intuitive’s global world-class technology teams, working with some of the best and brightest engineers while developing your skills and furthering your career working with some of the largest customers.
Key Responsibilities and Must-have skills:
- Lead the pre-sales (25%) to post-sales (75%) efforts building Public/Hybrid Cloud solutions working collaboratively with Intuitive and client technical and business stakeholders
- Be a customer advocate with obsession for excellence delivering measurable success for Intuitive’s customers with secure, scalable, highly available cloud architecture that leverage AWS Cloud services
- Experience in analyzing customer's business and technical requirements, assessing existing environment for Cloud enablement, advising on Cloud models, technologies and risk management strategies
- Apply creative thinking/approach to determine technical solutions that further business goals and align with corporate technology strategies
- Extensive experience building Well Architected solutions in-line with AWS cloud adoption framework (DevOps/DevSecOps, Database/Data Warehouse/Data Lake, App Modernization/Containers, Security, Governance, Risk, Compliance, Cost Management and Operational Excellence)
- Experience with application discovery, preferably with tools like Cloudscape, to discover application configurations, databases, filesystems, and application dependencies
- Experience with Well Architected Review, Cloud Readiness Assessments and defining migration patterns (MRA/MRP) for application migration e.g. Re-host, Re-platform, Re-architect etc
- Experience in architecting and deploying AWS Landing Zone architecture with CI/CD pipeline
- Experience on architecture, design of AWS cloud services to address scalability, performance, HA, security, availability, compliance, backup and DR, automation, alerting and monitoring and cost
- Hands-on experience in migrating applications to AWS leveraging proven tools and processes including migration, implementation, cutover and rollback plans and execution
- Hands-on experience in deploying various AWS services e.g. EC2, S3, VPC, RDS, Security Groups etc. using either manual or IaC, IaC is preferred
- Hands-on Experience in writing cloud automation scripts/code such as Ansible, Terraform, CloudFormation Template (AWS CFT) etc.
- Hands-on experience with application build/release processes and CI/CD pipelines
- Deep understanding of Agile processes (planning/stand-ups/retros, etc.), and the ability to interact with cross-functional teams, i.e., Development, Infrastructure, Security, Performance Engineering, and QA
Additional Requirements:
- Work with Technology leadership to grow the Cloud & DevOps practice. Create cloud practice collateral
- Work directly with sales teams to improve and help them drive the sales for Cloud & DevOps practice
- Assist Sales and Marketing team in creating sales and marketing collateral
- Write whitepapers and technology blogs to be published on social media and Intuitive website
- Create case studies for projects successfully executed by Intuitive delivery team
- Conduct sales enablement sessions to coach sales team on new offerings
- Flexibility with work hours supporting customer’s requirement and collaboration with global delivery teams
- Flexibility with Travel as required for Pre-sales/Post-sales, Design workshops, War-room Migration events and customer meetings
- Strong passion for modern technology exploration and development
- Excellent written, verbal communication skills, presentation, and collaboration skills - Team leadership skills
- Experience with Multi-cloud (Azure, GCP, OCI) is a big plus
- Experience with VMware Cloud Foundation as well as Advanced Windows and Linux Engineering is a big plus
- Experience with On-prem Data Engineering (Database, Data Warehouse, Data Lake) is a big plus
As a MLOps Engineer in QuantumBlack you will:
- Develop and deploy technology that enables data scientists and data engineers to build, productionize, and deploy machine learning models following best practices. Work to set the standards for SWE and DevOps practices within multi-disciplinary delivery teams.
- Choose and use the right cloud services, DevOps tooling, and ML tooling for the team to be able to produce high-quality code that allows your team to release to production.
- Build modern, scalable, and secure CI/CD pipelines to automate development and deployment workflows used by data scientists (ML pipelines) and data engineers (Data pipelines).
- Shape and support next generation technology that enables scaling ML products and platforms. Bring expertise in cloud to enable ML use case development, including MLOps.
Our Tech Stack:
We leverage AWS, Google Cloud, Azure, Databricks, Docker, Kubernetes, Argo, Airflow, Kedro, Python, Terraform, GitHub Actions, MLflow, Node.js, React, and TypeScript, amongst others, in our projects.
Key Skills:
• Excellent hands-on expert knowledge of cloud platform infrastructure and administration (Azure/AWS/GCP), with strong knowledge of cloud services integration and cloud security
• Expertise setting up CI/CD processes, building and maintaining secure DevOps pipelines with at least 2 major DevOps stacks (e.g., Azure DevOps, GitLab, Argo)
• Experience with modern development methods and tooling: containers (e.g., Docker) and container orchestration (K8s), CI/CD tools (e.g., CircleCI, Jenkins, GitHub Actions, Azure DevOps), version control (Git, GitHub, GitLab), orchestration/DAG tools (e.g., Argo, Airflow, Kubeflow)
• Hands-on coding skills in Python 3 (e.g., APIs), including automated testing frameworks and libraries (e.g., pytest), Infrastructure as Code (e.g., Terraform), and Kubernetes artifacts (e.g., deployments, operators, Helm charts)
• Experience setting up at least one contemporary MLOps tool (e.g., experiment tracking, model governance, packaging, deployment, feature store)
• Practical knowledge delivering and maintaining production software such as APIs and cloud infrastructure
• Knowledge of SQL (intermediate level or more preferred) and familiarity working with at least one common RDBMS (MySQL, Postgres, SQL Server, Oracle)
● Building and managing multiple application environments on AWS using automation tools like Terraform or CloudFormation.
● Deploying applications with zero downtime via automation with configuration management tools such as Ansible.
● Setting up infrastructure monitoring tools such as Prometheus and Grafana.
● Setting up centralised logging using tools such as ELK.
● Containerisation of applications/microservices.
● Ensuring 99.9% application availability with highly available infrastructure.
● Monitoring performance of applications and databases.
● Ensuring that systems are safe and secure against cyber security threats.
● Working with software developers to ensure that release cycles and deployment processes are followed.
● Evaluating existing applications and platforms, giving recommendations for enhancing performance via gap analysis, identifying the most practical alternative solutions, and assisting with modifications.
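The zero-downtime deployment bullet above can be sketched as a rolling update that replaces one instance at a time, so serving capacity never drops below N-1; the instance names and version labels are hypothetical:

```python
# Sketch of a rolling (zero-downtime) update: upgrade one instance at a
# time so the fleet never loses more than one healthy instance at once.

def rolling_update(instances, new_version):
    """Upgrade instances in place, one by one.

    `instances` maps instance name -> running version. Returns the
    healthy-instance count observed after each drain step.
    """
    healthy_history = []
    for name in list(instances):
        instances.pop(name)              # drain one instance from the pool
        healthy_history.append(len(instances))
        instances[name] = new_version    # bring it back on the new version
    return healthy_history

fleet = {"web-1": "v1", "web-2": "v1", "web-3": "v1"}
history = rolling_update(fleet, "v2")
print(fleet)    # every instance now runs v2
print(history)  # healthy count never fell below 2 of 3
```

Real tooling (Ansible serial batches, Kubernetes rolling updates) adds health checks and rollback on failure; the invariant it protects is the same one this sketch asserts.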
Skills -
● Strong knowledge of AWS managed services such as EC2, RDS, ECS, ECR, S3, CloudFront, SES, Redshift, ElastiCache, AMQP, etc.
● Experience in handling production workloads.
● Experience with the Nginx web server.
● Experience with NoSQL and SQL databases such as MongoDB, PostgreSQL, etc.
● Experience with containerisation of applications/microservices using Docker.
● Understanding of system administration in Linux environments.
● Strong knowledge of Infrastructure as Code tools such as Terraform, CloudFormation, etc.
● Strong knowledge of configuration management tools such as Ansible, Chef, etc.
● Familiarity with tools such as GitLab, Jenkins, Vercel, JIRA, etc.
● Proficiency in scripting languages including Bash, Python, etc.
● Full understanding of software development lifecycle best practices and agile methodology
● Strong communication and documentation skills.
● An ability to drive to goals and milestones while valuing and maintaining strong attention to detail
● Excellent judgment, analytical thinking, and problem-solving skills
● Self-motivated individual who possesses excellent time management and organizational skills
DevOps Engineer
Notice Period: 45 days / Immediate Joining
Banyan Data Services (BDS) is a US-based infrastructure services company, headquartered in San Jose, California, USA. It provides full-stack managed services to support business applications and data infrastructure. We provide data solutions and services on bare metal, on-prem, and all cloud platforms. Our engagement service is built on DevOps standard practice and the SRE model.
We are looking for a DevOps Engineer to help us build functional systems that improve customer experience. We offer you an opportunity to join our rocket-ship startup, run by a world-class executive team. We are looking for candidates who aspire to be a part of the cutting-edge solutions and services we offer that address next-gen data evolution challenges, and who are willing to use their experience in areas directly related to Infrastructure Services, Software as a Service, and Cloud Services to create a niche in the market.
Key Qualifications
· 4+ years of experience as a DevOps Engineer with monitoring, troubleshooting, and diagnosing infrastructure systems.
· Experience in implementation of continuous integration and deployment pipelines using Jenkins, JIRA, JFrog, etc
· Strong experience in Linux/Unix administration.
· Experience with automation/configuration management using Puppet, Chef, Ansible, Terraform, or other similar tools.
· Expertise in multiple coding and scripting languages including Shell, Python, and Perl
· Hands-on exposure to modern IT infrastructure (e.g., Docker Swarm, Mesos, Kubernetes, OpenStack)
· Exposure to any relational database technology (MySQL/Postgres/Oracle) or any NoSQL database
· Worked on open-source tools for logging, monitoring, search engines, caching, etc.
· Professional certification in AWS or any other cloud is preferable
· Excellent problem solving and troubleshooting skills
· Must have good written and verbal communication skills
Key Responsibilities
Ambitious individuals who can work under their own direction towards agreed targets/goals.
Must be flexible with office timings to accommodate multi-national client time zones.
Will be involved in solution design from the conceptual stages through the development cycle and deployments.
Be involved in development operations and support internal teams.
Improve infrastructure uptime, performance, resilience, and reliability through automation.
Willing to learn new technologies and work on research-oriented projects.
Proven interpersonal skills while contributing to team effort by accomplishing related results as needed.
Scope and deliver solutions with the ability to design solutions independently based on high-level architecture.
Independent thinking, with the ability to work in a fast-paced environment with creativity and brainstorming.
www.banyandata.com

- Degree in Computer Science or related discipline.
- AWS Certified Solutions Architect certification required
- 5+ years of architecture, design, implementation, and support of highly complex solutions (i.e. having an architectural sense for ensuring security and compliance, availability, reliability, etc.)
- Deep technical experience in serverless AWS infrastructure
- Understanding of cloud automation and orchestration tools and techniques, including Git, Terraform, ARM, or equivalent
- Create technical design documents, understand technical designs, and translate them into application requirements.
- Exercise independent judgment in evaluating alternative technical solutions
- Participate in code and design review process
- Write unit test cases for quality check of the deliverables
- Ability to work closely with others in a team environment as well as independently
- Proven ability to problem solve and troubleshoot
- Excellent verbal and written communication skills and the ability to interact professionally with a diverse group, executives, managers, and subject matter experts
- Excellent English communication skills are required
We are looking for a Solution Architect with at least 5 years’ experience working on the following to join our growing team:
- AWS
- PostgreSQL
- EC2 on AWS
- Cognito
- and most importantly Serverless
You will need a strong technical AWS background focused on architecting serverless (e.g., Lambda) AWS infrastructure.

