11+ Amazon RDS Jobs in Hyderabad | Amazon RDS Job openings in Hyderabad
Apply to 11+ Amazon RDS Jobs in Hyderabad on CutShort.io. Explore the latest Amazon RDS Job opportunities across top companies like Google, Amazon & Adobe.
Interfaces with other processes and/or business functions to ensure they can leverage the
benefits provided by the AWS platform process
Responsible for managing the configuration of all IaaS assets across the platforms
Hands-on Python experience
Manages the entire AWS platform (Python, Flask, REST API, Serverless Framework) and
recommends the options that best meet the organization's requirements
Has a good understanding of the various AWS services, particularly S3, Athena, Glue,
Lambda, CloudFormation, and other AWS serverless resources, along with Python
AWS certification is a plus
Knowledge of best practices for IT operations in an always-on, always-available service model
Responsible for the execution of process controls, ensuring that staff comply with process
and data standards
Qualifications
Bachelor’s degree in Computer Science, Business Information Systems or relevant experience and
accomplishments
3 to 6 years of experience in the IT field
AWS Python developer
AWS, Serverless/Lambda, Middleware.
Strong AWS skills including Data Pipeline, S3, RDS, and Redshift, with familiarity with other components
such as Lambda, Glue, Step Functions, and CloudWatch
Must have created REST APIs with AWS Lambda.
3 years of relevant Python experience
Good to have: experience working on projects and problem-solving with large-scale, multi-vendor
teams.
Good to have: knowledge of Agile development
Good knowledge of the SDLC.
Hands-on experience with AWS databases (RDS, etc.)
Good to have: unit-testing experience.
Good to have: working knowledge of CI/CD.
Good communication skills, as there will be client interaction and documentation.
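A REST endpoint on AWS Lambda, as this role calls for, can be sketched roughly as below. This is a minimal illustration using the API Gateway proxy-integration event shape; the `/health` route and the response payload are hypothetical, not from any specific project:

```python
import json

def lambda_handler(event, context):
    """Minimal AWS Lambda handler behind API Gateway: routes a GET
    request and returns a JSON body in the proxy-integration format."""
    path = event.get("path", "/")
    method = event.get("httpMethod", "GET")
    if method == "GET" and path == "/health":
        body, status = {"status": "ok"}, 200
    else:
        body, status = {"error": "not found"}, 404
    return {
        "statusCode": status,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(body),
    }
```

In a Serverless Framework project, this handler would be referenced from the function's `handler` setting and wired to an HTTP event.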
Education (degree): Bachelor’s degree in Computer Science, Business Information Systems or relevant
experience and accomplishments
Years of Experience: 3-6 years
Technical Skills
Linux/Unix system administration
Continuous Integration/Continuous Delivery tools like Jenkins
Cloud provisioning and management – Azure, AWS, GCP
Ansible, Chef, or Puppet
Python, PowerShell & BASH
Job Details
JOB TITLE/JOB CODE: AWS Python Developer, III-Sr. Analyst
RC: TBD
PREFERRED LOCATION: HYDERABAD, IND
POSITION REPORTS TO: Manager USI T&I Cloud Managed Platform
CAREER LEVEL: 3
Work Location:
Hyderabad
Job Description:
As a Backend Developer, you will:
- Implement server-side logic to ensure high performance and responsiveness to requests from the front-end.
- Integrate machine learning models for fraud detection, enhancing the security and reliability of our applications.
- Manage database operations, ensuring the integrity, security, and efficiency of data storage and retrieval.
- Collaborate with cross-functional teams to develop and maintain scalable, robust, and secure applications.
Responsibilities:
- Development of all server-side logic, definition, and maintenance of the central database.
- Ensuring high performance and responsiveness to front-end requests.
- Integrating data storage solutions, including databases, key-value stores, blob stores, etc.
- Implementing security and data protection measures.
- Integrating machine learning models for advanced data processing and analysis.
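Integrating a fraud-detection model into backend logic typically means exposing a scoring hook the request path can call. A dependency-free sketch is below; the hand-written logistic score stands in for a trained classifier (which in practice would be loaded from disk), and all feature names, weights, and the threshold are illustrative assumptions:

```python
import math

def fraud_score(amount, txn_per_hour):
    """Map raw transaction features to a probability-like score in (0, 1).
    Placeholder for a trained model; weights here are illustrative only."""
    z = 0.002 * amount + 0.3 * txn_per_hour - 4.0
    return 1.0 / (1.0 + math.exp(-z))

def check_transaction(txn, threshold=0.8):
    """Backend hook: score a transaction dict and flag it for review."""
    score = fraud_score(txn["amount"], txn["txn_per_hour"])
    return {"id": txn["id"], "score": round(score, 3), "flagged": score >= threshold}
```

Keeping the model behind a small function like `fraud_score` lets the real classifier be swapped in later without touching the request-handling code.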
Key Performance Indicators (KPI) For Role:
- Quality and efficiency of backend systems developed.
- Effectiveness in integrating and deploying machine learning models.
- Database performance and security measures.
- Positive feedback from team members and stakeholders.
- Adherence to coding standards and best practices in backend development.
Prior Experience Required:
- Minimum 3+ years of backend development experience.
- Proficient in Node.js or Python (especially the Django and Flask frameworks), and/or Go.
- Strong database management skills with both SQL and NoSQL databases.
- Experience in integrating and deploying machine learning models for real-world applications, specifically for fraud detection is highly desirable.
- Familiarity with RESTful API development and microservices architecture.
- Good understanding of asynchronous programming and its quirks and workarounds.
- Experience with cloud services (AWS, Azure, GCP) and serverless architectures.
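The asynchronous-programming point above is easiest to see in a fan-out: instead of awaiting I/O calls one by one, a backend issues them concurrently. A minimal `asyncio` sketch, where `fetch_user` is a hypothetical stand-in for a non-blocking database or HTTP call:

```python
import asyncio

async def fetch_user(user_id):
    """Stand-in for a non-blocking I/O call (DB or HTTP)."""
    await asyncio.sleep(0.01)  # simulate latency without blocking the event loop
    return {"id": user_id, "name": f"user-{user_id}"}

async def fetch_all(user_ids):
    """Fan out the requests concurrently rather than awaiting sequentially."""
    return await asyncio.gather(*(fetch_user(u) for u in user_ids))

users = asyncio.run(fetch_all([1, 2, 3]))
```

With `gather`, total latency is roughly that of the slowest call rather than the sum of all calls.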
Location:
Hyderabad
Collaboration:
The role involves close collaboration with frontend developers, data scientists, and project managers to ensure the seamless integration of backend services with front-end applications and data analytics models.
Salary:
Competitive, based on experience and market standards.
Education:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
Language Skills:
- Strong command of Business English, both verbal and written, is required.
Other Skills Required:
- Strong analytical and problem-solving skills.
- Proficient understanding of code versioning tools, such as Git.
- Ability to design and implement low-latency, high-availability, and performant applications.
- Knowledge of security compliance and integrated security protocols.
- Familiarity with continuous integration and continuous deployment (CI/CD) pipelines.
- Experience with containerization technologies (Docker, Kubernetes) is a plus.
Other Requirements:
- Proven ability to work in a fast-paced, agile development environment.
- Demonstrated ability to manage multiple projects simultaneously and meet deadlines.
- A portfolio showcasing successful backend projects.
What you'll do:
· Perform complex application programming activities with an emphasis on mobile development: Node.js, TypeScript, JavaScript, RESTful APIs and related backend frameworks
· Assist in the definition of system architecture and detailed solution design that are scalable and extensible
· Collaborate with Product Owners, Designers, and other engineers on different permutations to find the best solution possible
· Own the quality of code and do your own testing. Write unit tests and improve test coverage.
· Deliver amazing solutions to production that knock everyone’s socks off
· Mentor junior developers on the team
What we’re looking for:
· Amazing technical instincts. You know how to evaluate and choose the right technology and approach for the job. You have stories you could share about what problem you thought you were solving at first, but through testing and iteration, came to solve a much bigger and better problem that resulted in positive outcomes all-around.
· A love for learning. Technology is continually evolving around us, and you want to keep up to date to ensure we are using the right tech at the right time.
· A love for working in ambiguity—and making sense of it. You can take in a lot of disparate information and find common themes, recommend clear paths forward and iterate along the way. You don’t form an opinion and sell it as if it’s gospel; this is all about being flexible, agile, dependable, and responsive in the face of many moving parts.
· Confidence, not ego. You have an ability to collaborate with others and see all sides of the coin to come to the best solution for everyone.
· Flexible and willing to accept change in priorities, as necessary
· Demonstrable passion for technology (e.g., personal projects, open-source involvement)
· Enthusiastic embrace of DevOps culture and collaborative software engineering
· Ability and desire to work in a dynamic, fast paced, and agile team environment
· Enthusiasm for cloud computing platforms such as AWS or Azure
Basic Qualifications:
· Minimum B.S./M.S. in Computer Science or a related discipline from an accredited college or university
· At least 4 years of experience designing, developing, and delivering backend applications with Node.js, TypeScript
· At least 2 years of experience building internet facing services
· At least 2 years of experience with AWS and/or OpenShift
· Exposure to some of the following concepts: object-oriented programming, software engineering techniques, quality engineering, parallel programming, databases, etc.
· Experience integrating APIs with front-end and/or mobile-specific frameworks
· Proficiency in building and consuming RESTful APIs
· Ability to manage multiple tasks and consistently meet established timelines
· Strong collaboration skills
· Excellent written and verbal communications skills
Preferred Qualifications:
· Experience with Apache Cordova framework
· Demonstrable native coding background on iOS and Android
· Experience developing and deploying applications within Kubernetes based containers
· Experience in Agile and Scrum development techniques

at Altimetrik
Java with cloud
- Core Java, Spring Boot, Microservices
- DB2 or any RDBMS database application development
- Linux OS, shell scripting, batch processing
- Troubleshooting large-scale applications
- Experience in automation and unit-test frameworks is a must
- AWS Cloud experience desirable
- Agile development experience
- Complete development cycle (Dev, QA, UAT, Staging)
- Good oral and written communication skills
Development using .NET Core
Requirement understanding and participating in client calls
Work closely with the nearshore developers
Perform code reviews and unit testing as planned
Participate in peer reviews.
Must have
Strong knowledge of C#, .NET Core 5, and Entity Framework
Knowledge of PostgreSQL
Web API, building Microsoft .NET-based web or enterprise applications
Experience in building and consuming ASP.NET MVC & Web API or REST APIs using
jQuery, JSON, AJAX, and ASP.NET Web Services
Good to have
Knowledge of Docker and Kubernetes
Knowledge of AutoMapper
o Strong Python development skills, with 7+ years of experience with SQL.
o A bachelor's or master's degree in Computer Science or related areas
o 8+ years of experience in data integration and pipeline development
o Experience in implementing Databricks Delta Lake and data lakes
o Expertise designing and implementing data pipelines using modern data-engineering approaches and tools: SQL, Python, Delta Lake, Databricks, Snowflake, Spark
o Experience working with multiple file formats (Parquet, Avro, Delta Lake) & APIs
o Experience with AWS Cloud for data integration with S3.
o Hands-on development experience with Python and/or Scala.
o Experience with SQL and NoSQL databases.
o Experience using data-modeling techniques and tools (focused on dimensional design)
o Experience with microservice architecture using Docker and Kubernetes
o Experience working with one or more of the public cloud providers, i.e. AWS, Azure, or GCP
o Experience in effectively presenting and summarizing complex data to diverse audiences through visualizations and other means
o Excellent verbal and written communication skills and strong leadership capabilities
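The dimensional-design skill listed above comes down to separating facts (measurable events) from dimensions (descriptive attributes). A minimal star-schema sketch, with SQLite standing in for a warehouse and entirely illustrative table and column names:

```python
import sqlite3

# One fact table keyed into one dimension table: the simplest star schema.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE fact_sales  (product_key INTEGER, qty INTEGER, amount REAL);
    INSERT INTO dim_product VALUES (1, 'widget'), (2, 'gadget');
    INSERT INTO fact_sales  VALUES (1, 2, 20.0), (1, 1, 10.0), (2, 5, 50.0);
""")
# Typical dimensional query: aggregate the facts, grouped by a dimension attribute.
rows = conn.execute("""
    SELECT p.name, SUM(f.qty) AS units, SUM(f.amount) AS revenue
    FROM fact_sales f JOIN dim_product p USING (product_key)
    GROUP BY p.name ORDER BY p.name
""").fetchall()
```

The same fact/dimension split carries over directly to Delta Lake or Snowflake tables; only the storage layer changes.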
Skills:
Python
• Design, implement, and extend core platform services and APIs to enable new
products and features to be built.
• Provide technical contributions to the core team that powers our backend services for
millions of concurrent users.
• Build and own the core systems that form the architecture of our backend services,
from API gateways, service observability, and inter-service communications to higher-level
business components like identity, the therapeutic engine, and analytics systems,
just to name a few.
• Drive the qualitative aspects of the backend services, such as performance, scalability,
observability, reliability, and security.
Be Part Of Building The Future
Dremio is the Data Lake Engine company. Our mission is to reshape the world of analytics to deliver on the promise of data with a fundamentally new architecture, purpose-built for the exploding trend towards cloud data lake storage such as AWS S3 and Microsoft ADLS. We dramatically reduce and even eliminate the need for the complex and expensive workarounds that have been in use for decades, such as data warehouses (whether on-premise or cloud-native), structural data prep, ETL, cubes, and extracts. We do this by enabling lightning-fast queries directly against data lake storage, combined with full self-service for data users and full governance and control for IT. The results for enterprises are extremely compelling: 100X faster time to insight; 10X greater efficiency; zero data copies; and game-changing simplicity. And equally compelling is the market opportunity for Dremio, as we are well on our way to disrupting a $25BN+ market.
About the Role
The Dremio India team owns the DataLake Engine along with the cloud infrastructure and services that power it. With a focus on next-generation data analytics supporting modern table formats like Iceberg and Delta Lake, open-source initiatives such as Apache Arrow and Project Nessie, and hybrid-cloud infrastructure, this team provides various opportunities to learn, deliver, and grow in your career. We are looking for innovative minds with experience in leading and building high-quality distributed systems at massive scale and solving complex problems.
Responsibilities & ownership
- Lead, build, deliver and ensure customer success of next-generation features related to scalability, reliability, robustness, usability, security, and performance of the product.
- Work on distributed systems for data processing with efficient protocols and communication, locking and consensus, schedulers, resource management, low latency access to distributed storage, auto scaling, and self healing.
- Understand and reason about concurrency and parallelization to deliver scalability and performance in a multithreaded and distributed environment.
- Lead the team to solve complex and unknown problems
- Solve technical problems and customer issues with technical expertise
- Design and deliver architectures that run optimally on public clouds like GCP, AWS, and Azure
- Mentor other team members for high quality and design
- Collaborate with Product Management to deliver on customer requirements and innovation
- Collaborate with Support and field teams to ensure that customers are successful with Dremio
Requirements
- B.S./M.S./equivalent in Computer Science or a related technical field, or equivalent experience
- Fluency in Java/C++ with 8+ years of experience developing production-level software
- Strong foundation in data structures, algorithms, multi-threaded and asynchronous programming models, and their use in developing distributed and scalable systems
- 5+ years experience in developing complex and scalable distributed systems and delivering, deploying, and managing microservices successfully
- Hands-on experience in query processing or optimization, distributed systems, concurrency control, data replication, code generation, networking, and storage systems
- Passion for quality, zero downtime upgrades, availability, resiliency, and uptime of the platform
- Passion for learning and delivering using latest technologies
- Ability to solve ambiguous, unexplored, and cross-team problems effectively
- Hands-on experience working on projects on AWS, Azure, and Google Cloud Platform
- Experience with containers and Kubernetes for orchestration and container management in private and public clouds (AWS, Azure, and Google Cloud)
- Understanding of distributed file systems such as S3, ADLS, or HDFS
- Excellent communication skills and affinity for collaboration and teamwork
- Ability to work individually and collaboratively with other team members
- Ability to scope and plan solutions for big problems and mentor others on the same
- Interested and motivated to be part of a fast-moving startup with a fun and accomplished team
Role- Full time
Experience Level- 8 to 13 Years
Job Location- Hyderabad
Key Responsibilities :
Serves as a technical point of contact within the organization by:
Influencing the product requirements, behaviour and design (Automation Platform)
Driving early adoption of technology, features and best practices around product development
Lead development at all layers: GUI, backend (DevOps tools API integration) & DB
Work with a team of developers and testers in a highly agile environment to produce high-quality software.
Design and develop in-house tools; also expected to demonstrate new ideas through prototypes/proofs of concept.
Evaluate and assess newer technologies/architectures for product development
Keep up to date with emerging technologies/tools in the DevOps space and development trends to assess their impact on projects.
Must have:
Should possess a Bachelor's/Master's/PhD in Computer Science with a minimum of 8+ years of experience
Should possess a minimum of 3 years of experience in products/tools development
Should possess expertise in using various DevOps tool libraries and APIs (Jenkins/JIRA/AWX/Nexus/GitHub/BitBucket/SonarQube)
Experience in designing and developing products, tools, or test-automation frameworks using Java or Python technologies.
Should have a strong understanding of OOP, SDLC (Agile SAFe standards), and STLC
Proficient in Python, with good knowledge of its ecosystem (IDEs and frameworks)
Familiar with designing and developing applications using AngularJS, HTML5, Bootstrap, NodeJS, MongoDB, etc.
Experience in implementing, consuming, and testing web services / REST APIs would be an added advantage.
Experience working as a full-stack developer would be an added advantage
Regards,
Talent Acquisition Team
