50+ Python Jobs in Hyderabad | Python Job openings in Hyderabad
Apply to 50+ Python Jobs in Hyderabad on CutShort.io. Explore the latest Python Job opportunities across top companies like Google, Amazon & Adobe.
Immediate hiring for Senior Data Engineer
📍 Location: Hyderabad/Bangalore
💼 Experience: 7+ Years
🕒 Employment Type: Full-Time
🏢 Work Mode: Hybrid
📅 Notice Period: 0-1 month (serving notice only)
We are seeking a highly skilled and motivated Data Engineer to join our innovative team. As a Data Engineer, you will be responsible for designing, building, and maintaining scalable data pipelines and infrastructure to support our enterprise-wide data-driven initiatives. You will collaborate closely with cross-functional teams to ensure the availability, reliability, and performance of our data systems and solutions.
🔎 Key Responsibilities:
- Data Pipeline Development
- Data Modeling and Architecture
- Data Integration and API Development
- Data Infrastructure Management
- Collaboration and Documentation
🎯 Required Skills:
- Bachelor’s degree in Computer Science, Engineering, Information Systems, or a related field.
- 7+ years of proven experience in data engineering, software development, or related technical roles.
- 7+ years of experience in programming languages commonly used in data engineering (Python, Java, SQL, Stored Procedures, Scala, etc.).
- 7+ years of experience with database systems, data modeling, and advanced SQL.
- 7+ years of experience with ETL tools and platforms such as SSIS, Snowflake, Databricks, Azure Data Factory, stored procedures, etc.
- Experience with big data technologies such as Hadoop, Spark, Kafka, etc.
- 5+ years of experience working with cloud platforms like Azure, AWS, or Google Cloud.
- Strong analytical, problem-solving, and debugging skills with high attention to detail.
- Excellent communication and collaboration skills in a team-oriented, fast-paced environment.
- Ability to adapt to rapidly evolving technologies and business requirements.

Role Overview
We are looking for a Senior Data Quality Engineer who is passionate about building reliable and scalable data platforms. In this role, you will ensure high-quality, trustworthy data across pipelines and analytics systems by designing robust data ingestion frameworks, implementing data quality checks, and optimizing data transformations.
You will work closely with data engineers, analytics teams, and product stakeholders to ensure data accuracy, consistency, and reliability across the organization.
Key Responsibilities
- Cleanse, normalize, and enhance data quality across operational systems and new data sources flowing through the data platform.
- Design, build, monitor, and maintain ETL/ELT pipelines using Python, SQL, and Airflow.
- Develop and optimize data models, tables, and transformations in Snowflake.
- Build and maintain data ingestion workflows, including API integrations, file ingestion, and database connectors.
- Ensure data reliability, integrity, and performance across pipelines.
- Perform comprehensive data profiling to understand data structures, detect anomalies, and resolve inconsistencies.
- Implement data quality validation frameworks and automated checks across pipelines.
- Use data integration and data quality tools such as Deequ, Great Expectations (GX), Splink, Fivetran, Workato, Informatica, etc., to onboard new data sources.
- Troubleshoot pipeline failures and implement data monitoring and alerting mechanisms.
- Collaborate with engineering, analytics, and product teams in an Agile development environment.
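Several of the responsibilities above (data profiling, validation frameworks, automated checks) reduce to running declarative rules over incoming records. Here is a minimal, framework-free sketch in Python; the column names and thresholds are hypothetical, and a production pipeline would use a tool this posting names, such as Deequ or Great Expectations, rather than hand-rolled rules:

```python
# Minimal sketch of rule-based data quality checks.
# Hypothetical schema: a claims feed with id, amount, and status columns.
from typing import Callable

Row = dict  # each record is a plain dict


def check_not_null(field: str) -> Callable[[Row], bool]:
    """Rule: field must be present and non-empty."""
    return lambda row: row.get(field) not in (None, "")


def check_range(field: str, lo: float, hi: float) -> Callable[[Row], bool]:
    """Rule: numeric field must fall inside [lo, hi]."""
    def rule(row: Row) -> bool:
        try:
            return lo <= float(row[field]) <= hi
        except (KeyError, TypeError, ValueError):
            return False
    return rule


def profile(rows: list, rules: dict) -> dict:
    """Run every rule over every row; return the pass rate per rule."""
    return {
        name: sum(rule(r) for r in rows) / len(rows)
        for name, rule in rules.items()
    }


rows = [
    {"id": "1", "amount": "120.5", "status": "OPEN"},
    {"id": "2", "amount": "-3",    "status": "OPEN"},  # fails range check
    {"id": "3", "amount": "88.0",  "status": ""},      # fails null check
]
rules = {
    "amount_in_range": check_range("amount", 0, 1_000_000),
    "status_not_null": check_not_null("status"),
}
report = profile(rows, rules)
# report → {"amount_in_range": 0.666…, "status_not_null": 0.666…}
```

Pass rates like these feed naturally into the monitoring and alerting mechanisms the posting describes: alert when a rule's pass rate drops below an agreed threshold.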
Required Technical Skills
Core Technologies
- Strong hands-on experience with SQL
- Python for data transformation and pipeline development
- Workflow orchestration using Apache Airflow
- Experience working with Snowflake data warehouse
Data Engineering Expertise
- Strong understanding of ETL / ELT pipeline design
- Data profiling and data quality validation techniques
- Experience building data ingestion pipelines from APIs, files, and databases
- Data modeling and schema design
Tools & Platforms
- Data Quality Tools: Deequ, Great Expectations (GX), Splink
- Data Integration Tools: Fivetran, Workato, Informatica
- Cloud Platforms: AWS (preferred)
- Version Control & DevOps: Git, CI/CD pipelines
Qualifications
- 5–8 years of experience in Data Quality Engineering / Data Engineering
- Strong expertise in SQL, Python, Airflow, and Snowflake
- Experience working with large-scale datasets and distributed data systems
- Solid understanding of data engineering best practices across the development lifecycle
- Experience working in Agile environments (Scrum, sprint planning, etc.)
- Strong analytical and problem-solving skills
What We Look For
- Passion for data accuracy, reliability, and governance
- Ability to identify and resolve complex data issues
- Strong collaboration skills across data, engineering, and analytics teams
- Ownership mindset and attention to data integrity and performance
Why Join Us
- Opportunity to work on modern data platforms and large-scale datasets
- Collaborate with high-performing data and engineering teams
- Exposure to cloud data architecture and modern data tools
- Competitive compensation and strong career growth opportunities

Role Overview
We are looking for a highly skilled Senior Full Stack Developer with strong expertise in modern backend technologies and scalable web application development. The ideal candidate is passionate about building high-performance applications, robust APIs, and scalable systems while collaborating with cross-functional teams to deliver impactful solutions.
This role requires a developer who can work as an individual contributor, solve complex technical challenges, and build products that create real business impact.
Key Responsibilities
- Design, develop, and maintain scalable full-stack web applications
- Build and optimize robust backend services and RESTful APIs
- Develop high-performance applications using Node.js and FastAPI
- Collaborate with product managers, designers, and engineering teams to deliver end-to-end solutions
- Ensure application performance, security, scalability, and reliability
- Write clean, maintainable, and well-tested code
- Participate in architecture discussions and code reviews
- Troubleshoot complex production issues and provide effective technical solutions
- Follow modern development practices, coding standards, and CI/CD processes
Technical Skills
Core Technologies
- JavaScript – Advanced proficiency
- TypeScript – Strong hands-on experience
- Node.js – Strong backend development expertise
- Python (FastAPI) – API development and integration
Additional Skills (Good to Have)
- Experience with modern frontend frameworks such as React / Angular / Vue
- Experience with REST API design and microservices architecture
- Knowledge of cloud platforms (AWS / Azure / GCP)
- Experience with Docker, CI/CD pipelines
- Familiarity with databases such as PostgreSQL, MySQL, or MongoDB
Required Qualifications
- 5–8 years of experience in full-stack development
- Proven experience building scalable web applications and APIs
- Strong problem-solving and analytical skills
- Experience working in Agile development environments
- Ability to work independently and deliver high-quality solutions
What We Look For
- Passion for clean code and scalable architecture
- Strong ownership mindset
- Ability to solve complex technical challenges
- Excellent communication and collaboration skills
Position Title: Senior Data Engineer (Founding Member) - Insurtech Startup
Location: Hyderabad (Onsite)
Notice Period: Immediate to 15 days
Experience: 5 to 13 Years
Role Summary
We are looking for a Senior Data Engineer who will play a foundational role in:
- Client onboarding from a data perspective
- Understanding complex insurance data flows
- Designing secure, scalable ingestion pipelines
- Establishing strong data modeling and governance standards
This role sits at the intersection of technology, data architecture, security, and business onboarding.
Key Responsibilities
- Lead end-to-end data onboarding for new clients and partners, working closely with business and product teams to understand client systems, data formats, and migration constraints
- Define and implement data ingestion strategies supporting multiple sources and formats, including CSV, XML, JSON files, and API-based integrations
- Design, build, and operate robust, scalable ETL/ELT pipelines, supporting both batch and near-real-time data processing
- Handle complex insurance-domain data including Contracts, Claims, Reserves, Cancellations, and Refunds
- Architect ingestion pipelines with security-by-design principles, including secure credential management (keys, secrets, tokens), encryption at rest and in transit, and network-level controls where required
- Enforce role-based and attribute-based access controls, ensuring strict data isolation, tenancy boundaries, and stakeholder-specific access rules
- Design, maintain, and evolve canonical data models that support operational workflows, reporting & analytics, and regulatory/audit requirements
- Define and enforce data governance standards, ensuring compliance with insurance and financial data regulations and consistent definitions of business metrics across stakeholders
- Build and operate data pipelines on a cloud-native platform, leveraging distributed processing frameworks (Spark / PySpark), data lakes, lakehouses, and warehouses
- Implement and manage orchestration, monitoring, alerting, and cost-optimization mechanisms across the data platform
- Contribute to long-term data strategy, platform architecture decisions, and cost-optimization initiatives while maintaining strict security and compliance standards
Required Technical Skills
- Core Stack: Python, Advanced SQL (complex joins, window functions, performance tuning), PySpark
- Platforms: Azure, AWS, Databricks, Snowflake
- ETL / Orchestration: Airflow or similar frameworks
- Data Modeling: Star/Snowflake schema, dimensional modeling, OLAP/OLTP
- Visualization Exposure: Power BI
- Version Control & CI/CD: GitHub, Azure DevOps, or equivalent
- Integrations: APIs, real-time data streaming, ML model integration exposure
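To illustrate the "window functions" item in the core stack above, here is a self-contained example using Python's built-in sqlite3 module: ranking claims per contract and keeping the largest. The table and column names are invented for illustration; the same pattern applies in Snowflake or Databricks SQL.

```python
# ROW_NUMBER() OVER a partition: largest claim per contract (toy data).
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE claims (contract_id TEXT, claim_id TEXT, amount REAL)")
con.executemany(
    "INSERT INTO claims VALUES (?, ?, ?)",
    [("C1", "a", 100.0), ("C1", "b", 250.0), ("C2", "c", 90.0)],
)

rows = con.execute("""
    SELECT contract_id, claim_id, amount
    FROM (
        SELECT *,
               ROW_NUMBER() OVER (
                   PARTITION BY contract_id ORDER BY amount DESC
               ) AS rn
        FROM claims
    )
    WHERE rn = 1          -- keep only the top-ranked row per contract
    ORDER BY contract_id
""").fetchall()
# rows → [("C1", "b", 250.0), ("C2", "c", 90.0)]
```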
Preferred Qualifications
- Bachelor’s or Master’s degree in Computer Science, Engineering, or related field
- 5+ years of experience in data engineering or similar roles
- Strong ability to align technical solutions with business objectives
- Excellent communication and stakeholder management skills
What We Offer
- Direct collaboration with the core US data leadership team
- High ownership and trust to manage the function end-to-end
- Exposure to a global environment with advanced tools and best practices
Role
We are looking for a Full Stack Engineer who can own the entire technical stack, design systems that scale, and ship products fast. You will work across frontend, backend, and AI systems, making key architectural decisions while building a product used by real users.
This role offers high ownership, where engineers move ideas to production quickly and take responsibility for both technical decisions and product impact.
What would you do?
- Build and own the end-to-end platform using React, Node.js microservices, Python AI agents, and AWS.
- Design and implement scalable system architecture, including caching, databases, and state management between AI and UI.
- Develop AI-powered backend services and orchestrate LLM workflows using modern frameworks.
- Build highly interactive front-end experiences using modern React and real-time communication tools.
- Define and maintain engineering best practices, including CI/CD pipelines, monorepo structures, and development workflows.
- Collaborate closely with users and product teams to identify problems and ship impactful solutions.
- Continuously simplify systems by removing unnecessary complexity and keeping architecture clean.
Who should apply?
- Engineers with 4+ years of experience building and shipping production-grade products.
- Strong understanding of system design, architecture, and scalable backend systems.
- Hands-on experience with Python (FastAPI, async systems) and LLM-based applications.
- Proficiency in JavaScript / TypeScript with Node.js and modern backend frameworks.
- Experience building modern frontend applications using React (React 18+).
- Familiarity with databases such as Redis, PostgreSQL, or MongoDB, and designing scalable APIs.
- Engineers comfortable working in fast-paced environments with high ownership and minimal process overhead.
Technical Skills
- Backend: Node.js, Express, Python, FastAPI
- Frontend: React (React 18+), interactive UI development
- AI/LLM Systems: LLM orchestration, multi-model integrations
- Databases: Redis, PostgreSQL, MongoDB
- Infrastructure: AWS, CI/CD pipelines, microservices architecture
- Real-time Systems: Socket.IO, Server-Sent Events (SSE)
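Of the real-time options listed, Server-Sent Events is the simplest: it is plain text framing over an HTTP response, with each event written as `data:` lines terminated by a blank line. A framework-agnostic sketch of the framing (the field names follow the SSE specification; the helper name is our own):

```python
# Serialize one Server-Sent Events frame.
import json
from typing import Optional


def sse_event(data: dict, event: Optional[str] = None,
              event_id: Optional[str] = None) -> str:
    """Build an SSE frame; multi-line payloads get one data: line each."""
    lines = []
    if event_id is not None:
        lines.append(f"id: {event_id}")
    if event is not None:
        lines.append(f"event: {event}")
    for part in json.dumps(data).splitlines():
        lines.append(f"data: {part}")
    return "\n".join(lines) + "\n\n"  # blank line terminates the event


frame = sse_event({"status": "done"}, event="job_update", event_id="42")
# frame == 'id: 42\nevent: job_update\ndata: {"status": "done"}\n\n'
```

In practice a Node.js or FastAPI service would stream frames like this over a response with `Content-Type: text/event-stream`, and the React client would consume them with the browser's `EventSource` API.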
Job Title: Senior Full-Stack Developer (Python, React)
Location: Hyderabad, India (On-site Only)
Employment Type: Full-Time
Work Mode: Office-Based; Remote or Hybrid Not Allowed
Role Summary
We are looking for a skilled Senior Full-Stack Developer with expertise in Django (Python), React, RESTful APIs, GraphQL, microservices architecture, Redis, and AWS services (SNS, SQS, etc.). The ideal candidate will be responsible for designing, developing, and maintaining scalable backend systems and APIs to support dynamic frontend applications and services.
Required Skillset:
- 9+ years of professional experience writing production-grade software, including experience leading the design of complex systems.
- Strong expertise in Python (Django or equivalent frameworks) and REST API development.
- Solid experience with frontend frameworks such as React and TypeScript.
- Strong understanding of relational databases (MySQL or PostgreSQL preferred).
- Experience with CI/CD pipelines, containerization (Docker), and orchestration (Kubernetes).
- Hands-on experience with cloud infrastructure (AWS preferred).
- Proven experience debugging complex production issues and improving observability.
Preferred Skillset:
- Experience in enterprise SaaS or B2B systems with multi-tenancy, authentication (OAuth, SSO, SAML), and data partitioning; exposure to Kafka or RabbitMQ and microservices.
- Knowledge of event-driven architecture, A/B testing frameworks, and analytics pipelines.
- Familiarity with accessibility standards, best practices, and Agile/Scrum methodologies.
- Exposure to the Open edX ecosystem or open-source contributions in education tech.
- Demonstrated history of technical mentorship, team leadership, or cross-team collaboration.
Tech Stack:
- Backend: Python (Django), Celery and Redis for asynchronous workflows, REST APIs
- Frontend: React, TypeScript, SCSS
- Data: MySQL, Snowflake, Elasticsearch
- DevOps/Cloud: Docker, Kubernetes, GitHub Actions, AWS
- Monitoring: Datadog
- Collaboration Tools: GitHub, Jira, Slack, Segment
Primary Responsibilities:
- Lead, guide, and mentor a team of Python/Django engineers, offering hands-on technical support and direction.
- Architect, design, and deliver secure, scalable, and high-performing web applications.
- Manage the complete software development lifecycle including requirements gathering, system design, development, testing, deployment, and post-launch maintenance.
- Ensure compliance with coding standards, architectural patterns, and established development best practices.
- Collaborate with product teams, QA, UI/UX, and other stakeholders to ensure timely and high-quality product releases.
- Perform detailed code reviews, optimize system performance, and resolve production-level issues.
- Drive engineering improvements such as automation, CI/CD implementation, and modernization of outdated systems.
- Create and maintain technical documentation while providing regular updates to leadership and stakeholders.
Description
SRE Engineer
Role Overview
As a Site Reliability Engineer, you will play a critical role in ensuring the availability and performance of our customer-facing platform. You will work closely with DevOps, DBA, and Development teams to provision and maintain infrastructure, deploy and monitor our applications, and automate workflows. Your contributions will have a direct impact on customer satisfaction and overall experience.
Responsibilities and Deliverables
• Manage, monitor, and maintain highly available systems (Windows and Linux)
• Analyze metrics and trends to ensure rapid scalability.
• Address routine service requests while identifying ways to automate and simplify.
• Create infrastructure as code using Terraform, ARM Templates, Cloud Formation.
• Maintain data backups and disaster recovery plans.
• Design and deploy CI/CD pipelines using GitHub Actions, Octopus, Ansible, Jenkins, Azure DevOps.
• Adhere to security best practices through all stages of the software development lifecycle
• Follow and champion ITIL best practices and standards.
• Become a resource for emerging and existing cloud technologies with a focus on AWS.
Organizational Alignment
• Reports to the Senior SRE Manager
• This role involves close collaboration with DevOps, DBA, and security teams.
Technical Proficiencies
• Hands-on experience with AWS is a must-have.
• Proficiency in analyzing application, IIS, system, and security logs as well as CloudTrail events
• Practical experience with CI/CD tools such as GitHub Actions, Jenkins, Octopus
• Experience with observability tools such as New Relic, Application Insights, AppDynamics, or DataDog.
• Experience maintaining and administering Windows, Linux, and Kubernetes.
• Experience in automation using scripting languages such as Bash, PowerShell, or Python.
• Configuration management experience using Ansible, Terraform, Azure Automation Runbooks, or similar.
• Experience with SQL Server database maintenance and administration is preferred.
• Good Understanding of networking (VNET, subnet, private link, VNET peering).
• Familiarity with cloud concepts including certificates, OAuth, Azure AD, ASE, ASP, AKS, Azure Apps, Load Balancers, Application Gateway, Firewall, API Management, SQL Server, and databases on Azure
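As a small taste of the scripting and log-analysis proficiencies above, here is a Python sketch that counts 5xx responses per endpoint. The log format here is invented for illustration; real inputs would be IIS logs or CloudTrail exports, and the alerting would flow into an observability tool like the ones listed.

```python
# Count HTTP 5xx responses per endpoint from access-log lines.
import re
from collections import Counter

# Matches the quoted request line and trailing status code
LINE = re.compile(r'"\w+ (?P<path>\S+) HTTP/[\d.]+" (?P<status>\d{3})')


def five_xx_by_path(lines):
    """Return a Counter of 5xx error counts keyed by request path."""
    counts = Counter()
    for line in lines:
        m = LINE.search(line)
        if m and m.group("status").startswith("5"):
            counts[m.group("path")] += 1
    return counts


logs = [
    '10.0.0.1 - - "GET /api/orders HTTP/1.1" 200',
    '10.0.0.2 - - "GET /api/orders HTTP/1.1" 503',
    '10.0.0.3 - - "POST /api/pay HTTP/1.1" 500',
    '10.0.0.4 - - "GET /api/orders HTTP/1.1" 502',
]
errors = five_xx_by_path(logs)
# errors → {"/api/orders": 2, "/api/pay": 1}
```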
Experience
• 7+ years of experience in SRE or System Administration role
• Demonstrated ability to build and support high-availability Windows/Linux servers, with emphasis on the WISA stack (Windows/IIS/SQL Server/ASP.NET)
• 3+ years of experience working with cloud technologies including AWS, Azure.
• 1+ years of experience working with container technology including Docker and Kubernetes.
• Comfortable using Scrum, Kanban, or Lean methodologies.
Education
• Bachelor’s Degree or College Diploma in Computer Science, Information Systems, or equivalent experience.
Additional Job Details:
• Working hours: 2:00 PM / 3:00 PM to 11:30 PM IST
• Interview process: 3 technical rounds
• Work model: 3 days per week from office
About the company:
At Xenspire Technologies Pvt. Ltd., we are building People-First AI products—AI that augments human decision-making, reduces cognitive load, and earns trust through transparency, control, and reliability.
As a Lead Engineer in the Founding Engineering Team, you'll help set the technical direction and build the core systems that everything else will run on: product architecture, engineering standards, and AI-first capabilities embedded into real workflows. This is a challenging environment: short feedback loops, meaningful ownership, and problems that don't come with a playbook.
If you like shipping fast, thinking deeply, and building systems that scale from day one, you'll fit right in.
What You'll Do:
- Build and own core systems (web + backend + data) from scratch—designed to scale.
- Develop People-First AI capabilities: copilots, semantic search, automated workflows, and decision support—designed with guardrails, explainability, and human-in-the-loop controls.
- Drive architecture decisions: APIs, database design, eventing, caching, security basics, observability, and performance.
- Convert ambiguous business needs into clean product experiences with strong engineering discipline.
- Establish engineering standards: code quality, reviews, CI/CD, testing strategy, release readiness.
- Mentor engineers through example—this is a hands-on role, not a coordination role.
- Partner closely with founders/product/design; make trade-offs and ship outcomes, not just output.
What We're Looking For:
- 2-4 years building production-grade software, ideally in product companies or high-growth startups.
- Strong expertise in Backend: Python/Java, APIs, scalability patterns; Databases: PostgreSQL/MySQL plus one NoSQL/search system (Elastic/OpenSearch/Vector DB is a plus)
- Proven experience building platforms/products from zero to one, then stabilizing for scale.
- Multi-tenant SaaS experience, RBAC, audit logs, and security-first design patterns.
- High ownership mindset; comfortable with ambiguity, tight timelines, and strong accountability.
- Strong communication—clear docs, crisp decisions, visible trade-offs.
Nice to Have:
- Practical AI experience (not just demos): LLM integrations, prompt/tooling patterns, evaluation, safety/guardrails; RAG/semantic search, embeddings, vector stores, reranking, data pipelines
- Cloud familiarity: AWS/GCP/Azure, containers, basic infra-as-code, observability tooling.
- Experience shipping AI features with measurable quality (latency, accuracy, cost, adoption)
About Kanerika:
Kanerika Inc. is a premier global software products and services firm that specializes in providing innovative solutions and services for data-driven enterprises. Our focus is to empower businesses to achieve their digital transformation goals and maximize their business impact through the effective use of data and AI.
We leverage cutting-edge technologies in data analytics, data governance, AI-ML, GenAI/ LLM and industry best practices to deliver custom solutions that help organizations optimize their operations, enhance customer experiences, and drive growth.
Awards and Recognitions:
Kanerika has won several awards over the years, including:
1. Best Place to Work 2023 by Great Place to Work®
2. Top 10 Most Recommended RPA Start-Ups in 2022 by RPA Today
3. NASSCOM Emerge 50 Award in 2014
4. Frost & Sullivan India 2021 Technology Innovation Award for its Kompass composable solution architecture
5. Kanerika has also been recognized for its commitment to customer privacy and data security, having achieved ISO 27701, SOC2, and GDPR compliances.
Working for us:
Kanerika is rated 4.6/5 on Glassdoor, for many good reasons. We truly value our employees' growth, well-being, and diversity, and people’s experiences bear this out. At Kanerika, we offer a host of enticing benefits that create an environment where you can thrive both personally and professionally. From our inclusive hiring practices and mandatory training on creating a safe work environment to our flexible working hours and generous parental leave, we prioritize the well-being and success of our employees.
Our commitment to professional development is evident through our mentorship programs, job training initiatives, and support for professional certifications. Additionally, our company-sponsored outings and various time-off benefits ensure a healthy work-life balance. Join us at Kanerika and become part of a vibrant and diverse community where your talents are recognized, your growth is nurtured, and your contributions make a real impact. See the benefits section below for the perks you’ll get while working for Kanerika.
Role Responsibilities:
The following are high-level responsibilities you will take on, though the role is not limited to these:
- Design, development, and implementation of modern data pipelines, data models, and ETL/ELT processes.
- Architect and optimize data lake and warehouse solutions using Microsoft Fabric, Databricks, or Snowflake.
- Enable business analytics and self-service reporting through Power BI and other visualization tools.
- Collaborate with data scientists, analysts, and business users to deliver reliable and high-performance data solutions.
- Implement and enforce best practices for data governance, data quality, and security.
- Mentor and guide junior data engineers; establish coding and design standards.
- Evaluate emerging technologies and tools to continuously improve the data ecosystem.
Required Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Information Technology, Engineering, or a related field.
- 7-10 years of experience in data engineering or data platform development
- Strong hands-on experience in SQL, Snowflake, Python, and Airflow
- Solid understanding of data modeling, data governance, security, and CI/CD practices.
Preferred Qualifications:
- Experience in leading a team
- Familiarity with data modeling techniques and practices for Power BI.
- Knowledge of Azure Databricks or other data processing frameworks.
- Knowledge of Microsoft Fabric or other Cloud Platforms.
What we need?
- B.Tech in Computer Science or equivalent.
Why join us?
- Work with a passionate and innovative team in a fast-paced, growth-oriented environment.
- Gain hands-on experience in content marketing with exposure to real-world projects.
- Opportunity to learn from experienced professionals and enhance your marketing skills.
- Contribute to exciting initiatives and make an impact from day one.
- Competitive stipend and potential for growth within the company.
- Recognized for excellence in data and AI solutions with industry awards and accolades.
Employee Benefits:
1. Culture:
- Open Door Policy: Encourages open communication and accessibility to management.
- Open Office Floor Plan: Fosters a collaborative and interactive work environment.
- Flexible Working Hours: Allows employees to have flexibility in their work schedules.
- Employee Referral Bonus: Rewards employees for referring qualified candidates.
- Appraisal Process Twice a Year: Provides regular performance evaluations and feedback.
2. Inclusivity and Diversity:
- Hiring practices that promote diversity: Ensures a diverse and inclusive workforce.
- Mandatory POSH training: Promotes a safe and respectful work environment.
3. Health Insurance and Wellness Benefits:
- GMC and Term Insurance: Offers medical coverage and financial protection.
- Health Insurance: Provides coverage for medical expenses.
- Disability Insurance: Offers financial support in case of disability.
4. Child Care & Parental Leave Benefits:
- Company-sponsored family events: Creates opportunities for employees and their families to bond.
- Generous Parental Leave: Allows parents to take time off after the birth or adoption of a child.
- Family Medical Leave: Offers leave for employees to take care of family members' medical needs.
5. Perks and Time-Off Benefits:
- Company-sponsored outings: Organizes recreational activities for employees.
- Gratuity: Provides a monetary benefit as a token of appreciation.
- Provident Fund: Helps employees save for retirement.
- Generous PTO: Offers more than the industry standard for paid time off.
- Paid sick days: Allows employees to take paid time off when they are unwell.
- Paid holidays: Gives employees paid time off for designated holidays.
- Bereavement Leave: Provides time off for employees to grieve the loss of a loved one.
6. Professional Development Benefits:
- L&D with FLEX- Enterprise Learning Repository: Provides access to a learning repository for professional development.
- Mentorship Program: Offers guidance and support from experienced professionals.
- Job Training: Provides training to enhance job-related skills.
- Professional Certification Reimbursements: Assists employees in obtaining professional certifications.
- Promote from Within: Encourages internal growth and advancement opportunities.
We are seeking a highly skilled Qt/QML Engineer to design and develop advanced GUIs for aerospace applications. The role requires working closely with system architects, avionics software engineers, and mission systems experts to create reliable, intuitive, real-time UIs for mission-critical systems such as UAV ground control stations and cockpit displays.
Key Responsibilities
- Design, develop, and maintain high-performance UI applications using Qt/QML (Qt Quick, QML, C++).
- Translate system requirements into responsive, interactive, and user-friendly interfaces.
- Integrate UI components with real-time data streams from avionics systems, UAVs, or mission control software.
- Collaborate with aerospace engineers to ensure compliance with DO-178C, or MIL-STD guidelines where applicable.
- Optimise application performance for low-latency visualisation in mission-critical environments.
- Implement data visualisation (raster and vector maps, telemetry, flight parameters, mission planning overlays).
- Write clean, testable, and maintainable code while adhering to aerospace software standards.
- Work with cross-functional teams (system engineers, hardware engineers, test teams) to validate UI against operational requirements.
- Support debugging, simulation, and testing activities, including hardware-in-the-loop (HIL) setups.
Required Qualifications
- Bachelor’s / Master’s degree in Computer Science, Software Engineering, or related field.
- 1-3 years of experience in developing Qt/QML-based applications (Qt Quick, QML, Qt Widgets).
- Strong proficiency in C++ (11/14/17) and object-oriented programming.
- Experience integrating UI with real-time data sources (TCP/IP, UDP, serial, CAN, DDS, etc.).
- Knowledge of multithreading, performance optimisation, and memory management.
- Familiarity with aerospace/automotive domain software practices or mission-critical systems.
- Good understanding of UX principles for operator consoles and mission planning systems.
- Strong problem-solving, debugging, and communication skills.
Desirable Skills
- Experience with GIS/Mapping libraries (OpenSceneGraph, Cesium, Marble, etc.).
- Knowledge of OpenGL, Vulkan, or 3D visualisation frameworks.
- Exposure to DO-178C or aerospace software compliance.
- Familiarity with UAV ground control software (QGroundControl, Mission Planner, etc.) or similar mission systems.
- Experience with Linux and cross-platform development (Windows/Linux).
- Scripting knowledge in Python for tooling and automation.
- Background in defence, aerospace, automotive or embedded systems domain.
What We Offer
- Opportunity to work on cutting-edge aerospace and defence technologies.
- Collaborative and innovation-driven work culture.
- Exposure to real-world avionics and mission systems.
- Growth opportunities in autonomy, AI/ML for aerospace, and avionics UI systems.
We are seeking a talented AI/ML Engineer with strong hands-on experience in Generative AI and Large Language Models (LLMs) to join our Business Intelligence team. The role involves designing, developing, and deploying advanced AI/ML and GenAI-driven solutions to unlock business insights and enhance data-driven decision-making.
Key Responsibilities:
• Collaborate with business analysts and stakeholders to identify AI/ML and Generative AI use cases.
• Design and implement ML models for predictive analytics, segmentation, anomaly detection, and forecasting.
• Develop and deploy Generative AI solutions using LLMs (GPT, LLaMA, Mistral, etc.).
• Build and maintain Retrieval-Augmented Generation (RAG) pipelines and semantic search systems.
• Work with vector databases (FAISS, Pinecone, ChromaDB) for embedding storage and retrieval.
• Develop end-to-end AI/ML pipelines from data preprocessing to deployment.
• Integrate AI/ML and GenAI solutions into BI dashboards and reporting tools.
• Optimize models for performance, scalability, and reliability.
• Maintain documentation and promote knowledge sharing within the team.
Mandatory Requirements:
• 4+ years of relevant experience as an AI/ML Engineer.
• Hands-on experience in Generative AI and Large Language Models (LLMs) – Mandatory.
• Experience implementing RAG pipelines and prompt engineering techniques.
• Strong programming skills in Python.
• Experience with ML frameworks (TensorFlow, PyTorch, scikit-learn).
• Experience with vector databases (FAISS, Pinecone, ChromaDB).
• Strong understanding of SQL and database systems.
• Experience integrating AI solutions into BI tools (Power BI, Tableau).
• Strong analytical, problem-solving, and communication skills.
Good to Have:
• Experience with cloud platforms (AWS, Azure, GCP).
• Experience with Docker or Kubernetes.
• Exposure to NLP, computer vision, or deep learning use cases.
• Experience in MLOps and CI/CD pipelines.
Job Details
- Job Title: Lead I - Data Engineering (Python, AWS Glue, Pyspark, Terraform)
- Industry: Global digital transformation solutions provider
- Domain - Information technology (IT)
- Experience Required: 5-7 years
- Employment Type: Full Time
- Job Location: Hyderabad
- CTC Range: Best in Industry
Job Description
Data Engineer with AWS, Python, Glue, Terraform, Step Functions, and Spark
Skills: Python, AWS Glue, Pyspark, Terraform - All are mandatory
Notice period - 0 to 15 days only
Job stability is mandatory
Location: Hyderabad
In this role, you'll be responsible for building machine-learning-based systems and conducting data analysis that improves the quality of our large geospatial data. You'll develop NLP models to extract information, use outlier detection to identify anomalies, and apply data science methods to quantify the quality of our data. You will take part in the development, integration, productionisation, and deployment of the models at scale, which requires a good combination of data science and software development skills.
Responsibilities
- Development of machine learning models
- Building and maintaining software development solutions
- Provide insights by applying data science methods
- Take ownership of delivering features and improvements on time
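The role above mentions outlier detection for data quality; a minimal z-score detector, using only the standard library, might look like this (the threshold is an illustrative assumption, not a prescribed value):

```python
from statistics import mean, stdev

def zscore_outliers(values, threshold=3.0):
    # Flag points whose z-score magnitude exceeds the threshold.
    mu = mean(values)
    sigma = stdev(values)  # sample standard deviation
    if sigma == 0:
        return []  # all values identical: nothing can be an outlier
    return [v for v in values if abs((v - mu) / sigma) > threshold]
```

Real geospatial pipelines would use more robust methods (median/MAD, isolation forests), but the shape of the check is the same.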
Must-have Qualifications
- 4+ years of experience
- Senior data scientist, preferably with knowledge of NLP
- Strong programming skills and extensive experience with Python
- Professional experience working with LLMs, transformers and open-source models from HuggingFace
- Professional experience working with machine learning and data science, such as classification, feature engineering, clustering, anomaly detection and neural networks
- Knowledgeable in classic machine learning algorithms (SVM, Random Forest, Naive Bayes, KNN etc.).
- Experience using deep learning libraries and platforms, such as PyTorch
- Experience with frameworks such as Sklearn, Numpy, Pandas, Polars
- Excellent analytical and problem-solving skills
- Excellent oral and written communication skills
Extra Merit Qualifications
- Knowledge in at least one of the following: NLP, information retrieval, data mining
- Ability to do statistical modeling and build predictive models
- Programming skills and experience with Scala and/or Java
Role Overview
We are hiring a Principal Datacenter Backend Developer to architect and build highly scalable, reliable backend platforms for modern data centers. This role owns control-plane and data-plane services powering orchestration, monitoring, automation, and operational intelligence across large-scale on-prem, hybrid, and cloud data center environments.
This is a hands-on principal IC role with strong architectural ownership and technical leadership responsibilities.
Key Responsibilities
- Own end-to-end backend architecture for datacenter platforms (orchestration, monitoring, DCIM, automation).
- Design and build high-availability distributed systems at scale.
- Develop backend services using Java (Spring Boot / Micronaut / Quarkus) and/or Python (FastAPI / Flask / Django).
- Build microservices for resource orchestration, telemetry ingestion, capacity and asset management.
- Design REST/gRPC APIs and event-driven systems.
- Drive performance optimization, scalability, and reliability best practices.
- Embed SRE principles, observability, and security-by-design.
- Mentor senior engineers and influence technical roadmap decisions.
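To illustrate the event-driven side of these responsibilities, here is a minimal in-process publish/subscribe sketch. A real deployment would use a broker such as Kafka or Pulsar; the topic and class names below are purely illustrative.

```python
from collections import defaultdict

class EventBus:
    # Minimal in-process stand-in for a broker like Kafka:
    # topics map to lists of subscriber callbacks.
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, event):
        # Fan the event out to every handler registered on the topic.
        for handler in self._subscribers[topic]:
            handler(event)

class TelemetryAggregator:
    # Example consumer: aggregates CPU readings per host.
    def __init__(self):
        self.readings = defaultdict(list)

    def on_metric(self, event):
        self.readings[event["host"]].append(event["cpu"])
```

The same subscribe/publish contract carries over to a durable broker; only delivery guarantees and serialization change.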
Required Skills
- Strong hands-on experience in Java and/or Python.
- Deep understanding of distributed systems and microservices.
- Experience with Kubernetes, Docker, CI/CD, and cloud-native deployments.
- Databases: PostgreSQL/MySQL, NoSQL, time-series data.
- Messaging systems: Kafka / Pulsar / RabbitMQ.
- Observability tools: Prometheus, Grafana, ELK/OpenSearch.
- Secure backend design (OAuth2, RBAC, audit logging).
Nice to Have
- Experience with DCIM, NMS, or infrastructure automation platforms.
- Exposure to hyperscale or colocation data centers.
- AI/ML-based monitoring or capacity planning experience.
Why Join
- Architect mission-critical platforms for large-scale data centers.
- High-impact principal role with deep technical ownership.
- Work on complex, real-world distributed systems problems.
Title: Team Lead – Software Development
(Lead a team of developers to deliver applications in line with product strategy and growth)
Experience: 8–10 years
Department: Information Technology
Classification: Full-Time
Location: Hybrid in Hyderabad, India (3 days onsite and 2 days remote)
Job Description:
Looking for a full-time Software Development Team Lead to lead our high-performing Information Technology team. This person will play a key role in Clarity’s business by overseeing a development team, focusing on existing systems and long-term growth. This person will serve as the technical leader, able to discuss data structures, new technologies, and methods of achieving system goals. This person will be crucial in facilitating collaboration among team members and providing mentoring.
Reporting to the Director, Software Development, this person will be responsible for the day-to-day operations of their team and will be the first point of escalation and technical contact for the team.
Job Responsibilities:
- Manages all activities of their software development team and sets goals for each team member to ensure timely project delivery.
- Performs code reviews and writes code if needed.
- Collaborates with the Information Technology department and business management team to establish priorities for the team’s plan and manage team performance.
- Provides guidance on project requirements, developer processes, and end-user documentation.
- Supports an excellent customer experience by being proactive in assessing escalations and working with the team to respond appropriately.
- Uses technical expertise to contribute towards building best-in-class products. Analyzes business needs and develops a mix of internal and external software systems that work well together.
- Using Clarity platforms, writes, reviews, and revises product requirements and specifications. Analyzes software requirements, implements design plans, and reviews unit tests. Participates in other areas of the software development process.
Required Skills:
- A Bachelor’s degree in Computer Science, Information Technology, Engineering, or a related discipline.
- Excellent written and verbal communication skills.
- Experience with .NET Framework, web applications, Windows applications, and web services.
- Experience in developing and maintaining applications using C#, .NET Core, ASP.NET MVC, and Entity Framework.
- Experience in building responsive front-ends using React.js, Angular.js, HTML5, CSS3, and JavaScript.
- Experience in creating and managing databases, stored procedures, and complex queries with SQL Server.
- Experience with Azure cloud infrastructure.
- 8+ years of experience in designing and coding software in the above technology stack.
- 3+ years of managing a team within a development organization.
- 3+ years of experience in Agile methodologies.
Preferred Skills:
- Experience in Python, WordPress, PHP.
- Experience in using Azure DevOps.
- Experience working with Salesforce or any other comparable ticketing system.
- Experience in insurance/consumer benefits/file processing (EDI).
JOB DETAILS:
* Job Title: Specialist I - DevOps Engineering
* Industry: Global Digital Transformation Solutions Provider
* Salary: Best in Industry
* Experience: 7-10 years
* Location: Bengaluru (Bangalore), Chennai, Hyderabad, Kochi (Cochin), Noida, Pune, Thiruvananthapuram
Job Description
Job Summary:
As a DevOps Engineer focused on Perforce to GitHub migration, you will be responsible for executing seamless and large-scale source control migrations. You must be proficient with GitHub Enterprise and Perforce, possess strong scripting skills (Python/Shell), and have a deep understanding of version control concepts.
The ideal candidate is a self-starter, a problem-solver, and thrives on challenges while ensuring smooth transitions with minimal disruption to development workflows.
Key Responsibilities:
- Analyze and prepare Perforce repositories — clean workspaces, merge streams, and remove unnecessary files.
- Handle large files efficiently using Git Large File Storage (LFS) for files exceeding GitHub’s 100MB size limit.
- Use git-p4 fusion (Python-based tool) to clone and migrate Perforce repositories incrementally, ensuring data integrity.
- Define migration scope — determine how much history to migrate and plan the repository structure.
- Manage branch renaming and repository organization for optimized post-migration workflows.
- Collaborate with development teams to determine migration points and finalize migration strategies.
- Troubleshoot issues related to file sizes, Python compatibility, network connectivity, or permissions during migration.
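One concrete step from the list above, flagging files over GitHub's 100 MB limit for Git LFS, can be sketched in Python with only the standard library. The threshold matches GitHub's documented per-file limit; the helper name is illustrative.

```python
import os

GITHUB_LIMIT = 100 * 1024 * 1024  # GitHub rejects pushes containing files over 100 MB

def find_lfs_candidates(root, limit=GITHUB_LIMIT):
    # Walk the checked-out tree and return (path, size) for files over the limit,
    # largest first, so they can be moved to Git LFS before the migration is pushed.
    oversized = []
    for dirpath, dirnames, filenames in os.walk(root):
        dirnames[:] = [d for d in dirnames if d != ".git"]  # skip git metadata
        for name in filenames:
            path = os.path.join(dirpath, name)
            size = os.path.getsize(path)
            if size > limit:
                oversized.append((path, size))
    return sorted(oversized, key=lambda entry: entry[1], reverse=True)
```

Each path returned would then be registered with `git lfs track` (and committed via `.gitattributes`) before the migrated history is pushed to GitHub.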
Required Qualifications:
- Strong knowledge of Git/GitHub and preferably Perforce (Helix Core) — understanding of differences, workflows, and integrations.
- Hands-on experience with P4-Fusion.
- Familiarity with cloud platforms (AWS, Azure) and containerization technologies (Docker, Kubernetes).
- Proficiency in migration tools such as git-p4 fusion — installation, configuration, and troubleshooting.
- Ability to identify and manage large files using Git LFS to meet GitHub repository size limits.
- Strong scripting skills in Python and Shell for automating migration and restructuring tasks.
- Experience in planning and executing source control migrations — defining scope, branch mapping, history retention, and permission translation.
- Familiarity with CI/CD pipeline integration to validate workflows post-migration.
- Understanding of source code management (SCM) best practices, including version history and repository organization in GitHub.
- Excellent communication and collaboration skills for cross-team coordination and migration planning.
- Proven practical experience in repository migration, large file management, and history preservation during Perforce to GitHub transitions.
Skills: Github, Kubernetes, Perforce, Perforce (Helix Core), Devops Tools
Must-Haves
Git/GitHub (advanced), Perforce (Helix Core) (advanced), Python/Shell scripting (strong), P4-Fusion (hands-on experience), Git LFS (proficient)
We are looking for a Staff Engineer - Python to join one of our engineering teams at our office in Hyderabad.
What would you do?
- Own end-to-end delivery of backend projects from requirements and LLDs to production.
- Lead technical design and execution, ensuring scalability, reliability, and code quality.
- Build and integrate chatbot and AI-driven workflows with third-party systems.
- Diagnose and resolve complex performance and production issues.
- Drive testing, documentation, and engineering best practices.
- Mentor engineers and act as the primary technical point of contact for the project/client.
Who Should Apply?
- 5+ years of hands-on experience building backend systems in Python.
- Proficiency in building web-based applications using Django or similar frameworks.
- In-depth knowledge of the Python stack and API-first system design.
- Experience working with SQL and NoSQL databases including PostgreSQL/MySQL, MongoDB, ElasticSearch, or key-value stores.
- Strong experience owning design, delivery, and technical decision-making.
- Proven ability to lead and mentor engineers through reviews and execution.
- Clear communicator with a high-ownership, delivery-focused mindset.
Nice to Have
- Experience contributing to system-level design discussions.
- Prior exposure to AI/LLM-based systems or conversational platforms.
- Experience working directly with clients or external stakeholders.
- Background in fast-paced product or service environments.
5–10 years of experience in backend or full-stack development (Java, C#, Python, or Node.js preferred).
• Design, develop, and deploy full-stack web applications (front-end, back-end, APIs, and databases).
• Build responsive, user-friendly UIs using modern JavaScript frameworks (React, Vue, or Angular).
• Develop robust backend services and RESTful or GraphQL APIs using Node.js, Python, Java, or similar technologies.
• Manage and optimize databases (SQL and NoSQL).
• Collaborate with UX/UI designers, product managers, and QA engineers to refine requirements and deliver solutions.
• Implement CI/CD pipelines and support cloud deployments (AWS, Azure, or GCP).
• Write clean, testable, and maintainable code with appropriate documentation.
• Monitor performance, identify bottlenecks, and troubleshoot production issues.
• Stay up to date with emerging technologies and recommend improvements to tools, processes, and architecture.
• Proficiency in front-end technologies: HTML5, CSS3, JavaScript/TypeScript, and frameworks like React, Vue.js, or Angular.
• Strong experience with server-side programming (Node.js, Python/Django, Java/Spring Boot, or .NET).
• Experience with databases: PostgreSQL, MySQL, MongoDB, or similar.
• Familiarity with API design, microservices architecture, and REST/GraphQL best practices.
• Working knowledge of version control (Git/GitHub) and DevOps pipelines.
• Understanding of cloud platforms (AWS, Azure, GCP) and containerization (Docker, Kubernetes).
JOB DETAILS:
* Job Title: Tester III - Software Testing (Automation testing + Python + AWS)
* Industry: Global digital transformation solutions provider
* Salary: Best in Industry
* Experience: 4–10 years
* Location: Hyderabad
Job Description
Responsibilities:
- Develop, maintain, and execute automation test scripts using Python.
- Build reliable and reusable test automation frameworks for web and cloud-based applications.
- Work with AWS cloud services for test execution, environment management, and integration needs.
- Perform functional, regression, and integration testing as part of the QA lifecycle.
- Analyze test failures, identify root causes, raise defects, and collaborate with development teams.
- Participate in requirement review, test planning, and strategy discussions.
- Contribute to CI/CD setup and integration of automation suites.
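A common building block behind "reliable and reusable test automation frameworks" is a polling wait helper, similar in spirit to Selenium's explicit waits. Here is a minimal stdlib sketch; the name and defaults are illustrative.

```python
import time

def wait_until(condition, timeout=10.0, interval=0.1):
    # Poll `condition` until it returns a truthy value or the timeout elapses.
    # Mirrors the explicit-wait pattern used in Selenium/PyTest automation suites
    # to avoid flaky fixed sleeps.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(interval)
    raise TimeoutError(f"condition not met within {timeout}s")
```

In a suite, `condition` would wrap a check such as "element is visible" or "job status endpoint reports done".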
Required Experience:
- Strong hands-on experience in Automation Testing.
- Proficiency in Python for automation scripting and framework development.
- Understanding and practical exposure to AWS services (Lambda, EC2, S3, CloudWatch, or similar).
- Good knowledge of QA methodologies, SDLC/STLC, and defect management.
- Familiarity with automation tools/frameworks (e.g., Selenium, PyTest).
- Experience with Git or other version control systems.
Good to Have:
- API testing experience (REST, Postman, REST Assured).
- Knowledge of Docker/Kubernetes.
- Exposure to Agile/Scrum environment.
Skills: Automation testing, Python, Java, ETL, AWS
JOB DETAILS:
* Job Title: Tester III - Software Testing (Automation Testing + Python + Azure)
* Industry: Global digital transformation solutions provider
* Salary: Best in Industry
* Experience: 4–10 years
* Location: Hyderabad
Job Description
Responsibilities:
- Design, develop, and execute automation test scripts using Python.
- Build and maintain scalable test automation frameworks.
- Work with Azure DevOps for CI/CD, pipeline automation, and test management.
- Perform functional, regression, and integration testing for web and cloud‑based applications.
- Analyze test results, log defects, and collaborate with developers for timely closure.
- Participate in requirement analysis, test planning, and strategy discussions.
- Ensure test coverage, maintain script quality, and optimize automation suites.
Required Experience:
- Strong hands-on expertise in automation testing for web/cloud applications.
- Solid proficiency in Python for creating automation scripts and frameworks.
- Experience working with Azure services and Azure DevOps pipelines.
- Good understanding of QA methodologies, SDLC/STLC, and defect lifecycle.
- Experience with tools like Selenium, PyTest, or similar frameworks (good to have).
- Familiarity with Git or other version control tools.
Good to Have:
- Experience with API testing (REST, Postman, or similar tools)
- Knowledge of Docker/Kubernetes
- Exposure to Agile/Scrum environments
Skills: automation testing, python, java, azure
JOB DETAILS:
* Job Title: Tester III - Software Testing- Playwright + API testing
* Industry: Global digital transformation solutions provider
* Salary: Best in Industry
* Experience: 4–10 years
* Location: Hyderabad
Job Description
Responsibilities:
- Design, develop, and maintain automated test scripts for web applications using Playwright.
- Perform API testing using industry-standard tools and frameworks.
- Collaborate with developers, product owners, and QA teams to ensure high-quality releases.
- Analyze test results, identify defects, and track them to closure.
- Participate in requirement reviews, test planning, and test strategy discussions.
- Ensure automation coverage, maintain reusable test frameworks, and optimize execution pipelines.
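As a tool-agnostic illustration of the API-testing work above, here is a minimal response checker using only the standard library. A real suite would drive requests through Postman, REST Assured, or Playwright's API client; the field names here are illustrative.

```python
import json

def check_api_response(status_code, body, expected_status=200, required_fields=()):
    # Validate an HTTP status code and the presence of required JSON fields,
    # returning a list of failure messages (an empty list means the check passed).
    failures = []
    if status_code != expected_status:
        failures.append(f"expected HTTP {expected_status}, got {status_code}")
    try:
        payload = json.loads(body)
    except json.JSONDecodeError:
        return failures + ["response body is not valid JSON"]
    for field in required_fields:
        if field not in payload:
            failures.append(f"missing field: {field}")
    return failures
```

Returning messages instead of raising lets a framework aggregate all failures from one response into a single report.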
Required Experience:
- Strong hands-on experience in Automation Testing for web-based applications.
- Proven expertise in Playwright (JavaScript, TypeScript, or Python-based scripting).
- Solid experience in API testing (Postman, REST Assured, or similar tools).
- Good understanding of software QA methodologies, tools, and processes.
- Ability to write clear, concise test cases and automation scripts.
- Experience with CI/CD pipelines (Jenkins, GitHub Actions, Azure DevOps) is an added advantage.
Good to Have:
- Knowledge of cloud environments (AWS/Azure)
- Experience with version control tools like Git
- Familiarity with Agile/Scrum methodologies
Skills: automation testing, sql, api testing, soap ui testing, playwright
AI-Native Software Developer Intern
Build real AI agents used daily across the company
We’re looking for a high-agency, AI-native software developer intern to help us build internal AI agents that improve productivity across our entire company (80–100 people using them daily).
You will ship real systems, used by real teams, with real impact.
If you’ve never built anything outside coursework, this role is probably not a fit.
What You’ll Work On
You will work directly on designing, building, deploying, and iterating AI agents that power internal workflows.
Examples of problems you may tackle:
Internal AI agents for:
- Knowledge retrieval across Notion / docs / Slack
- Automated report generation
- Customer support assistance
- Process automation (ops, hiring, onboarding, etc.)
- Decision-support copilots
- Prompt engineering + structured outputs + tool-using agents
Building workflows using:
- LLM APIs
- Vector databases
- Agent frameworks
- Internal dashboards
- Improving reliability, latency, cost, and usability of AI systems
- Designing real UX around AI tools (not just scripts)
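The tool-using agents mentioned above typically boil down to a dispatch loop. Here is a minimal sketch with a stubbed model step standing in for an LLM call; the tool registry and action format are illustrative assumptions, not any specific framework's API.

```python
# Registry of tools the agent may invoke; in a real system these wrap
# APIs, database queries, or internal services.
TOOLS = {
    "add": lambda args: args["a"] + args["b"],
    "upper": lambda args: args["text"].upper(),
}

def run_agent(model_step, max_steps=5):
    # model_step stands in for an LLM call: given the history so far, it returns
    # either {"tool": name, "args": {...}} or {"final": answer}.
    history = []
    for _ in range(max_steps):
        action = model_step(history)
        if "final" in action:
            return action["final"]
        result = TOOLS[action["tool"]](action["args"])
        history.append({"tool": action["tool"], "result": result})
    raise RuntimeError("agent did not finish within max_steps")
```

Frameworks like LangChain implement the same loop with structured-output parsing and retries layered on top.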
You will own features end-to-end:
- Problem understanding
- Solution design
- Implementation
- Testing
- Deployment
- Iteration based on user feedback
What We Expect From You
You must:
- Be AI-native: you actively use tools like:
- ChatGPT / Claude / Cursor / Copilot
- AI for debugging, scaffolding, refactoring
- Prompt iteration
- Rapid prototyping
- Be comfortable with at least one programming language (Python, TypeScript, JS, etc.)
- Have strong critical thinking
- You question requirements
- You think about edge cases
- You optimize systems, not just make them “work”
- Be high agency
- You don’t wait for step-by-step instructions
- You proactively propose solutions
- You take ownership of outcomes
- Be able to learn fast on the job
Help will be provided but you will not be spoonfed.
Absolute Requirement (Non-Negotiable)
If you have not built any side projects with a visible output, you will most likely be rejected.
We expect at least one of:
- A deployed web app
- A GitHub repo with meaningful commits
- A working AI tool
- A live demo link
- A product you built and shipped
- An agent, automation, bot, or workflow you created
Bonus Points (Strong Signals)
These are not required but will strongly differentiate you:
- Built projects using:
- LLM APIs (OpenAI, Anthropic, etc.)
- LangChain / LlamaIndex / custom agent frameworks
- Vector DBs like Pinecone, Weaviate, FAISS
- RAG systems
- Experience deploying:
- Vercel, Fly.io, Render, AWS, etc.
- Built internal tools for a team before
- Strong product intuition (you care about UX, not just code)
- Experience automating your own workflows using scripts or AI
What You’ll Gain
You will get:
- Real experience building AI agents used daily
- Ownership over production systems
- Deep exposure to:
- AI architecture
- Product thinking
- Iterative engineering
- Tradeoffs (cost vs latency vs accuracy)
- A portfolio that actually means something in 2026
- A strong shot at long-term roles based on performance
If you perform well, you won't leave with a certificate; you'll leave with real-world building experience.
Who This Is Perfect For
- People who already build things for fun
- People who automate their own life with scripts/tools
- People who learn by shipping
- People who prefer responsibility over structure
- People who are excited by ambiguity
Who This Is Not For
Be honest with yourself:
- If you need step-by-step instructions
- If you avoid open-ended problems
- If you’ve never built anything outside assignments
- If you dislike using AI tools while coding
This will be frustrating for you.
How To Apply
Send:
- Your GitHub
- Links to projects (deployed preferred)
- A short note explaining:
- What you built
- Why you built it
- What you’d improve if you had more time
Strong portfolios beat strong resumes.
Role Summary
We are looking for a seasoned Python/Django expert with 10–12 years of real-world development experience and a strong background in leading engineering teams. The selected candidate will be responsible for managing complex technical initiatives, mentoring team members, ensuring best coding practices, and partnering closely with cross-functional teams. This position demands deep technical proficiency, strong leadership capability, and exceptional communication skills.
Primary Responsibilities
· Lead, guide, and mentor a team of Python/Django engineers, offering hands-on technical support and direction.
· Architect, design, and deliver secure, scalable, and high-performing web applications.
· Manage the complete software development lifecycle including requirements gathering, system design, development, testing, deployment, and post-launch maintenance.
· Ensure compliance with coding standards, architectural patterns, and established development best practices.
· Collaborate with product teams, QA, UI/UX, and other stakeholders to ensure timely and high-quality product releases.
· Perform detailed code reviews, optimize system performance, and resolve production-level issues.
· Drive engineering improvements such as automation, CI/CD implementation, and modernization of outdated systems.
· Create and maintain technical documentation while providing regular updates to leadership and stakeholders.
Required Skills & Qualifications
· 10–14 years of professional experience in software development with strong expertise in Python and Django.
· Solid understanding of key web technologies, including REST APIs, HTML, CSS, and JavaScript.
· Hands-on experience working with relational and NoSQL databases (such as PostgreSQL, MySQL, or MongoDB).
· Familiarity with major cloud platforms (AWS, Azure, or GCP) and container tools like Docker and Kubernetes is a plus.
· Proficient in Git workflows, CI/CD pipelines, and automated testing tools.
· Strong analytical and problem-solving skills, especially in designing scalable and high-availability systems.
· Excellent communication skills—both written and verbal.
· Demonstrated leadership experience in mentoring teams and managing technical deliverables.
· Must be available to work on-site in the Hyderabad office; remote work is not allowed.
Preferred Qualifications
· Experience with microservices, asynchronous frameworks (such as FastAPI or Celery), or event-driven architectures.
· Familiarity with Agile/Scrum methodologies.
· Previous background as a technical lead or engineering manager.
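To illustrate the asynchronous frameworks noted in the preferred qualifications, here is a minimal asyncio sketch of concurrent task fan-out, the pattern FastAPI handlers and Celery-style workers build on. The function names and simulated latency are illustrative.

```python
import asyncio

async def process_order(order_id):
    # Placeholder for I/O-bound work (a DB write or HTTP call);
    # the sleep simulates network latency.
    await asyncio.sleep(0.01)
    return f"order-{order_id}:done"

async def process_batch(order_ids):
    # Fan the orders out concurrently instead of awaiting them one by one;
    # gather preserves input order in its results.
    return await asyncio.gather(*(process_order(i) for i in order_ids))
```

Run with `asyncio.run(process_batch([...]))`; the batch completes in roughly one task's latency rather than the sum of all of them.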
REVIEW CRITERIA:
MANDATORY:
- Strong Hands-On AWS Cloud Engineering / DevOps Profile
- Mandatory (Experience 1): Must have 12+ years of experience in AWS Cloud Engineering / Cloud Operations / Application Support
- Mandatory (Experience 2): Must have strong hands-on experience supporting AWS production environments (EC2, VPC, IAM, S3, ALB, CloudWatch)
- Mandatory (Infrastructure as a code): Must have hands-on Infrastructure as Code experience using Terraform in production environments
- Mandatory (AWS Networking): Strong understanding of AWS networking and connectivity (VPC design, routing, NAT, load balancers, hybrid connectivity basics)
- Mandatory (Cost Optimization): Exposure to cost optimization and usage tracking in AWS environments
- Mandatory (Core Skills): Experience handling monitoring, alerts, incident management, and root cause analysis
- Mandatory (Soft Skills): Strong communication skills and stakeholder coordination skills
ROLE & RESPONSIBILITIES:
We are looking for a hands-on AWS Cloud Engineer to support day-to-day cloud operations, automation, and reliability of AWS environments. This role works closely with the Cloud Operations Lead, DevOps, Security, and Application teams to ensure stable, secure, and cost-effective cloud platforms.
KEY RESPONSIBILITIES:
- Operate and support AWS production environments across multiple accounts
- Manage infrastructure using Terraform and support CI/CD pipelines
- Support Amazon EKS clusters, upgrades, scaling, and troubleshooting
- Build and manage Docker images and push to Amazon ECR
- Monitor systems using CloudWatch and third-party tools; respond to incidents
- Support AWS networking (VPCs, NAT, Transit Gateway, VPN/DX)
- Assist with cost optimization, tagging, and governance standards
- Automate operational tasks using Python, Lambda, and Systems Manager
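As an example of the Python automation mentioned above, here is a sketch of a tagging-compliance check of the kind a scheduled Lambda might run. The required tag set and resource shape are illustrative assumptions; a real version would fetch resources via boto3 describe calls.

```python
REQUIRED_TAGS = {"owner", "cost-center", "environment"}  # illustrative policy

def find_untagged(resources, required=REQUIRED_TAGS):
    # resources: list of {"id": ..., "tags": {...}} dicts, e.g. built from a
    # boto3 describe call. Returns resource id -> sorted list of missing tags.
    violations = {}
    for res in resources:
        missing = required - set(res.get("tags", {}))
        if missing:
            violations[res["id"]] = sorted(missing)
    return violations
```

The returned mapping could feed an alert or a governance report; wiring it to real AWS APIs is left to the deployment.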
IDEAL CANDIDATE:
- Strong hands-on AWS experience (EC2, VPC, IAM, S3, ALB, CloudWatch)
- Experience with Terraform and Git-based workflows
- Hands-on experience with Kubernetes / EKS
- Experience with CI/CD tools (GitHub Actions, Jenkins, etc.)
- Scripting experience in Python or Bash
- Understanding of monitoring, incident management, and cloud security basics
NICE TO HAVE:
- AWS Associate-level certifications
- Experience with Karpenter, Prometheus, New Relic
- Exposure to FinOps and cost optimization practices
MANDATORY CRITERIA:
- Education: B.Tech / M.Tech in ECE / CSE / IT
- Experience: 10–12 years in hardware board design, system hardware engineering, and full product deployment cycles
- Proven expertise in digital, analog, and power electronic circuit analysis & design
- Strong hands-on experience designing boards with SoCs, FPGAs, CPLDs, and MPSoC architectures
- Deep understanding of signal integrity, EMI/EMC, and high-speed design considerations
- Must have successfully completed at least two hardware product development cycles from high-level design to final deployment
- Ability to independently handle schematic design, design analysis (DC drop, SI), and cross-team design reviews
- Experience in sourcing & procurement of electronic components, PCBs, and mechanical parts for embedded/IoT/industrial hardware
- Strong experience in board bring-up, debugging, issue investigation, and cross-functional triage with firmware/software teams
- Expertise in hardware validation, test planning, test execution, equipment selection, debugging, and report preparation
- Proficiency in Cadence Allegro or Altium EDA tools (mandatory)
- Experience coordinating with layout, mechanical, SI, EMC, manufacturing, and supply chain teams
- Strong understanding of manufacturing services, production pricing models, supply chain, and logistics for electronics/electromechanical components
DESCRIPTION:
COMPANY OVERVIEW:
The company is a semiconductor and embedded system design company with a focus on Embedded, Turnkey ASICs, Mixed Signal IP, Semiconductor & Product Engineering and IoT solutions catering to Aerospace & Defence, Consumer Electronics, Automotive, Medical and Networking & Telecommunications.
REQUIRED SKILLS:
- Extensive experience in hardware board design across multiple product field-deployment cycles.
- Strong foundation and expertise in analyzing digital, analog, and power electronic circuits.
- Proficient with SoC-, FPGA-, CPLD-, and MPSoC-architecture-based board designs.
- Knowledgeable in signal integrity and EMI/EMC concepts for digital and power electronics.
- Completed at least two projects from high-level design to final product-level deployment.
- Capable of independently managing a product’s schematics and design analysis (DC drop, signal integrity), and of coordinating reviews with peers on the layout, mechanical, SI, and EMC teams.
- Sourcing and procurement of electronic components, PCBs, and mechanical parts for cutting-edge IoT, embedded, and industrial product development.
- Experienced in board bring-up, issue investigation, and triage in collaboration with firmware and software teams.
- Skilled in preparing hardware design documentation, validation test planning, identifying necessary test equipment, test development, execution, debugging, and report preparation.
- Effective communication and interpersonal skills for collaborative work with cross-functional teams, including post-silicon bench validation, BIOS, and driver development/QA.
- Hands-on experience with Cadence Allegro/Altium EDA tools is essential.
- Familiarity with programming and scripting languages such as Python and Perl, and experience in test automation, is advantageous.
- Excellent exposure to coordinating manufacturing services, production pricing models, supply chain, and logistics in the electronics and electromechanical components domain.
MANDATORY CRITERIA:
- Education: B.Tech / M.Tech in ECE / CSE / IT
- Experience: 10–12 years in hardware board design, system hardware engineering, and full product deployment cycles
- Proven expertise in digital, analog, and power electronic circuit analysis & design
- Strong hands-on experience designing boards with SoCs, FPGAs, CPLDs, and MPSoC architectures
- Deep understanding of signal integrity, EMI/EMC, and high-speed design considerations
- Must have successfully completed at least two hardware product development cycles from high-level design to final deployment
- Ability to independently handle schematic design, design analysis (DC drop, SI), and cross-team design reviews
- Experience in sourcing & procurement of electronic components, PCBs, and mechanical parts for embedded/IoT/industrial hardware
- Strong experience in board bring-up, debugging, issue investigation, and cross-functional triage with firmware/software teams
- Expertise in hardware validation, test planning, test execution, equipment selection, debugging, and report preparation
- Proficiency in Cadence Allegro or Altium EDA tools (mandatory)
- Experience coordinating with layout, mechanical, SI, EMC, manufacturing, and supply chain teams
- Strong understanding of manufacturing services, production pricing models, supply chain, and logistics for electronics/electromechanical components
DESCRIPTION:
COMPANY OVERVIEW:
The company is a semiconductor and embedded system design company with a focus on Embedded, Turnkey ASICs, Mixed Signal IP, Semiconductor & Product Engineering and IoT solutions catering to Aerospace & Defence, Consumer Electronics, Automotive, Medical and Networking & Telecommunications.
REQUIRED SKILLS:
- Extensive experience in hardware board design across multiple product field-deployment cycles.
- Strong foundation and expertise in analyzing digital, analog, and power electronic circuits.
- Proficient with SoC-, FPGA-, CPLD-, and MPSoC-based board designs.
- Knowledgeable in signal integrity and EMI/EMC concepts for digital and power electronics.
- Completed at least two projects from high-level design to final product-level deployment.
- Capable of independently managing a product's schematics and design analysis (DC drop, signal integrity), and coordinating reviews with peers on the layout, mechanical, SI, and EMC teams.
- Sourcing and procurement of electronic components, PCBs, and mechanical parts for cutting-edge IoT, embedded, and industrial product development.
- Experienced in board bring-up, issue investigation, and triage in collaboration with firmware and software teams.
- Skilled in preparing hardware design documentation, validation test planning, identifying necessary test equipment, test development, execution, debugging, and report preparation.
- Effective communication and interpersonal skills for collaborative work with cross-functional teams, including post-silicon bench validation, BIOS, and driver development/QA.
- Hands-on experience with Cadence Allegro/Altium EDA tools is essential.
- Familiarity with programming and scripting languages like Python and Perl, and experience in test automation is advantageous.
- Should have excellent exposure to coordinating manufacturing services, production pricing models, supply chain, and logistics in the electronics and electromechanical components domain.
Job Title: QA Automation Engineer
Key Responsibilities & Skills:
- Strong hands-on experience in Java automation testing
- Expertise in Selenium for web application automation
- Experience with BDD frameworks using Cucumber (feature files and step definitions)
- Hands-on experience in API automation using Rest Assured
- Working knowledge of Python automation scripting
- Experience with Robot Framework for test automation
- Ability to design, develop, and maintain scalable automation frameworks
- Experience in test execution, reporting, and defect tracking
- Strong analytical, problem-solving, and communication skills
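Rest Assured is a Java library, but the same API-automation idea translates to Python. The following is a minimal, hedged sketch of asserting on a parsed JSON response; the payload, field names, and helper are illustrative, and the canned string stands in for a live HTTP call.

```python
import json

def assert_json_response(raw_body: str, expected: dict) -> dict:
    """Parse a JSON API response and assert expected key/value pairs.

    Mirrors the then().body(...) style of checks Rest Assured performs in Java.
    """
    payload = json.loads(raw_body)
    for key, want in expected.items():
        got = payload.get(key)
        assert got == want, f"{key}: expected {want!r}, got {got!r}"
    return payload

# Canned response standing in for a real HTTP call (illustrative data)
canned = '{"id": 42, "status": "active", "role": "tester"}'
user = assert_json_response(canned, {"status": "active", "id": 42})
```

In a real framework this helper would sit behind the HTTP client layer so every endpoint test shares the same assertion and reporting path.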
We are seeking a motivated Data Analyst to support business operations by analyzing data, preparing reports, and delivering meaningful insights. The ideal candidate should be comfortable working with data, identifying patterns, and presenting findings in a clear and actionable way.
Key Responsibilities:
- Collect, clean, and organize data from internal and external sources
- Analyze large datasets to identify trends, patterns, and opportunities
- Prepare regular and ad-hoc reports for business stakeholders
- Create dashboards and visualizations using tools like Power BI or Tableau
- Work closely with cross-functional teams to understand data requirements
- Ensure data accuracy, consistency, and quality across reports
- Document data processes and analysis methods
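As an illustration of the "collect, clean, and organize" step above, here is a stdlib-only sketch that normalizes text fields and drops duplicates; the rows and field names are invented for the example, not taken from any real dataset.

```python
from collections import OrderedDict

def clean_records(rows):
    """Normalize free-text fields and drop exact duplicates, keeping first occurrence."""
    seen = OrderedDict()
    for row in rows:
        # Trim whitespace and lowercase strings so 'Hyderabad ' and 'hyderabad' match
        norm = {k: v.strip().lower() if isinstance(v, str) else v
                for k, v in row.items()}
        key = tuple(sorted(norm.items()))
        if key not in seen:
            seen[key] = norm
    return list(seen.values())

raw = [
    {"city": " Hyderabad ", "sales": 120},
    {"city": "hyderabad", "sales": 120},   # duplicate after normalization
    {"city": "Pune", "sales": 90},
]
cleaned = clean_records(raw)  # two unique rows remain
```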
Required Skills: CI/CD Pipeline, Kubernetes, SQL Database, Excellent Communication & Stakeholder Management, Python
Criteria:
Looking for candidates with a 15-30 day notice period.
Looking for candidates based in Hyderabad only.
Looking for candidates from EPAM only.
1. 4+ years of software development experience
2. Strong experience with Kubernetes, Docker, and CI/CD pipelines in cloud-native environments.
3. Hands-on with NATS for event-driven architecture and streaming.
4. Skilled in microservices, RESTful APIs, and containerized app performance optimization.
5. Strong in problem-solving, team collaboration, clean code practices, and continuous learning.
6. Proficient in Python (Flask) for building scalable applications and APIs.
7. Focus: Java, Python, Kubernetes, Cloud-native development
8. SQL database
Description
Position Overview
We are seeking a skilled Developer to join our engineering team. The ideal candidate will have strong expertise in Java and Python ecosystems, with hands-on experience in modern web technologies, messaging systems, and cloud-native development using Kubernetes.
Key Responsibilities
- Design, develop, and maintain scalable applications using Java and Spring Boot framework
- Build robust web services and APIs using Python and Flask framework
- Implement event-driven architectures using NATS messaging server
- Deploy, manage, and optimize applications in Kubernetes environments
- Develop microservices following best practices and design patterns
- Collaborate with cross-functional teams to deliver high-quality software solutions
- Write clean, maintainable code with comprehensive documentation
- Participate in code reviews and contribute to technical architecture decisions
- Troubleshoot and optimize application performance in containerized environments
- Implement CI/CD pipelines and follow DevOps best practices
Required Qualifications
- Bachelor's degree in Computer Science, Information Technology, or related field
- 4+ years of experience in software development
- Strong proficiency in Java with deep understanding of web technology stack
- Hands-on experience developing applications with Spring Boot framework
- Solid understanding of Python programming language with practical Flask framework experience
- Working knowledge of NATS server for messaging and streaming data
- Experience deploying and managing applications in Kubernetes
- Understanding of microservices architecture and RESTful API design
- Familiarity with containerization technologies (Docker)
- Experience with version control systems (Git)
Skills & Competencies
- Technical skills: Java (Spring Boot, Spring Cloud, Spring Security)
- Python (Flask, SQLAlchemy, REST APIs)
- NATS messaging patterns (pub/sub, request/reply, queue groups)
- Kubernetes (deployments, services, ingress, ConfigMaps, Secrets)
- Web technologies (HTTP, REST, WebSocket, gRPC)
- Container orchestration and management
- Soft skills: Problem-solving and analytical thinking
- Strong communication and collaboration
- Self-motivated with ability to work independently
- Attention to detail and code quality
- Continuous learning mindset
- Team player with mentoring capabilities
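The NATS patterns listed above (pub/sub, request/reply, queue groups) need a running NATS server and a client such as nats-py. Purely as an in-memory stand-in, this sketch mimics two of those semantics: fan-out to plain subscribers and round-robin load balancing within a queue group. All subject and subscriber names are hypothetical.

```python
import itertools
from collections import defaultdict

class MiniBroker:
    """In-memory stand-in for NATS-style subjects: fan-out pub/sub plus queue groups."""

    def __init__(self):
        self.subs = defaultdict(list)    # subject -> plain subscribers (all receive)
        self.queues = defaultdict(dict)  # subject -> {group: [members]}
        self._rr = {}                    # round-robin cursors per (subject, group)

    def subscribe(self, subject, cb, queue=None):
        if queue is None:
            self.subs[subject].append(cb)
        else:
            self.queues[subject].setdefault(queue, []).append(cb)

    def publish(self, subject, msg):
        for cb in self.subs[subject]:    # every plain subscriber sees the message
            cb(msg)
        for group, cbs in self.queues[subject].items():
            # queue groups load-balance: exactly one member handles each message
            it = self._rr.setdefault((subject, group),
                                     itertools.cycle(range(len(cbs))))
            cbs[next(it)](msg)

broker = MiniBroker()
seen = []
broker.subscribe("orders", lambda m: seen.append(("audit", m)))
broker.subscribe("orders", lambda m: seen.append(("w1", m)), queue="workers")
broker.subscribe("orders", lambda m: seen.append(("w2", m)), queue="workers")
broker.publish("orders", "o1")
broker.publish("orders", "o2")
# the audit subscriber saw both messages; the workers split them one each
```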
AI Agent Builder – Internal Functions and Data Platform Development Tools
About the Role:
We are seeking a forward-thinking AI Agent Builder to lead the design, development, deployment, and usage reporting of Microsoft Copilot and other AI-powered agents across our data platform development tools and internal business functions. This role will be instrumental in driving automation, improving onboarding, and enhancing operational efficiency through intelligent, context-aware assistants.
This role is central to our GenAI transformation strategy. You will help shape the future of how our teams interact with data, reduce administrative burden, and unlock new efficiencies across the organization. Your work will directly contribute to our “Art of the Possible” initiative—demonstrating tangible business value through AI.
You Will:
• Copilot Agent Development: Use Microsoft Copilot Studio and Agent Builder to create, test, and deploy AI agents that automate workflows, answer queries, and support internal teams.
• Data Engineering Enablement: Build agents that assist with data connector scaffolding, pipeline generation, and onboarding support for engineers.
• Knowledge Base Integration: Curate and integrate documentation (e.g., ERDs, connector specs) into Copilot-accessible repositories (SharePoint, Confluence) to support contextual AI responses.
• Prompt Engineering: Design reusable prompt templates and conversational flows to streamline repeated tasks and improve agent usability.
• Tool Evaluation & Integration: Assess and integrate complementary AI tools (e.g., GitLab Duo, Databricks AI, Notebook LM) to extend Copilot capabilities.
• Cross-Functional Collaboration: Partner with product, delivery, PMO, and security teams to identify high-value use cases and scale successful agent implementations.
• Governance & Monitoring: Ensure agents align with Responsible AI principles, monitor performance, and iterate based on feedback and evolving business needs.
• Adoption and Usage Reporting: Use Microsoft Viva Insights and other tools to report on user adoption, usage and business value delivered.
What We're Looking For:
• Proven experience with Microsoft 365 Copilot, Copilot Studio, or similar AI platforms, ChatGPT, Claude, etc.
• Strong understanding of data engineering workflows, tools (e.g., Git, Databricks, Unity Catalog), and documentation practices.
• Familiarity with SharePoint, Confluence, and Microsoft Graph connectors.
• Experience in prompt engineering and conversational UX design.
• Ability to translate business needs into scalable AI solutions.
• Excellent communication and collaboration skills across technical and non-technical audiences.
Bonus Points:
• Experience with GitLab Duo, Notebook LM, or other AI developer tools.
• Background in enterprise data platforms, ETL pipelines, or internal business systems.
• Exposure to AI governance, security, and compliance frameworks.
• Prior work in a regulated industry (e.g., healthcare, finance) is a plus.
Required Skills: Advanced Hardware Board Design Expertise, Signal Integrity, EMI/EMC & Design Analysis, Board Bring-Up & Troubleshooting, EDA Tools & Technical Documentation, Cross-Functional & Supply Chain Coordination
Criteria:
- Education: B.Tech / M.Tech in ECE / CSE / IT
- Experience: 10–12 years in hardware board design, system hardware engineering, and full product deployment cycles
- Proven expertise in digital, analog, and power electronic circuit analysis & design
- Strong hands-on experience designing boards with SoCs, FPGAs, CPLDs, and MPSoC architectures
- Deep understanding of signal integrity, EMI/EMC, and high-speed design considerations
- Must have successfully completed at least two hardware product development cycles from high-level design to final deployment
- Ability to independently handle schematic design, design analysis (DC drop, SI), and cross-team design reviews
- Experience in sourcing & procurement of electronic components, PCBs, and mechanical parts for embedded/IoT/industrial hardware
- Strong experience in board bring-up, debugging, issue investigation, and cross-functional triage with firmware/software teams
- Expertise in hardware validation, test planning, test execution, equipment selection, debugging, and report preparation
- Proficiency in Cadence Allegro or Altium EDA tools (mandatory)
- Experience coordinating with layout, mechanical, SI, EMC, manufacturing, and supply chain teams
- Strong understanding of manufacturing services, production pricing models, supply chain, and logistics for electronics/electromechanical components
Description
REQUIRED SKILLS:
• Extensive experience in hardware board design across multiple product field-deployment cycles.
• Strong foundation and expertise in analyzing digital, analog, and power electronic circuits.
• Proficient with SoC, FPGAs, CPLD and MPSOC architecture-based board designs.
• Knowledgeable in signal integrity, EMI/EMC concepts for digital and power electronics.
• Completed at least two projects from high-level design to final product-level deployment.
• Capable of independently managing a product's schematics and design analysis (DC drop, signal integrity), and coordinating reviews with peers on the layout, mechanical, SI, and EMC teams.
• Sourcing and procurement of electronic components, PCBs, and mechanical parts for cutting-edge IoT, embedded, and industrial product development.
• Experienced in board bring-up, issue investigation, and triage in collaboration with firmware and software teams.
• Skilled in preparing hardware design documentation, validation test planning, identifying necessary test equipment, test development, execution, debugging, and report preparation.
• Effective communication and interpersonal skills for collaborative work with cross-functional teams, including post-silicon bench validation, BIOS, and driver development/QA.
• Hands-on experience with Cadence Allegro/Altium EDA tools is essential.
• Familiarity with programming and scripting languages like Python and Perl, and experience in test automation is advantageous.
• Should have excellent exposure to coordinating manufacturing services, production pricing models, supply chain, and logistics in the electronics and electromechanical components domain.
Education Requirements:
B.Tech / M.Tech (ECE / CSE / IT)
Experience - 10 to 12 Years
Job Description: Python-Azure AI Developer
Experience: 5+ years
Locations: Bangalore | Pune | Chennai | Jaipur | Hyderabad | Gurgaon | Bhopal
Mandatory Skills:
- Python: Expert-level proficiency with FastAPI/Flask
- Azure Services: Hands-on experience integrating Azure cloud services
- Databases: PostgreSQL, Redis
- AI Expertise: Exposure to Agentic AI technologies, frameworks, or SDKs with strong conceptual understanding
Good to Have:
- Workflow automation tools (n8n or similar)
- Experience with LangChain, AutoGen, or other AI agent frameworks
- Azure OpenAI Service knowledge
Key Responsibilities:
- Develop AI-powered applications using Python and Azure
- Build RESTful APIs with FastAPI/Flask
- Integrate Azure services for AI/ML workloads
- Implement agentic AI solutions
- Database optimization and management
- Workflow automation implementation
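The "agentic AI" responsibility above usually reduces to routing a model's chosen action to real code. Here is a minimal, framework-free sketch of tool registration and dispatch; the tool name, stubbed data, and the canned model reply are all illustrative (in production the JSON "function call" would come from a model such as Azure OpenAI, not a hard-coded string).

```python
import json

TOOLS = {}

def tool(fn):
    """Register a function so an agent loop can invoke it by name."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def get_weather(city: str) -> str:
    # Stubbed data source; a real agent would call an external API here
    return {"Hyderabad": "34C, clear"}.get(city, "unknown")

def run_action(model_output: str) -> str:
    """Dispatch a model's JSON 'function call' to the matching registered tool."""
    call = json.loads(model_output)
    fn = TOOLS[call["name"]]
    return fn(**call["arguments"])

# A canned model response choosing a tool (illustrative only)
reply = run_action('{"name": "get_weather", "arguments": {"city": "Hyderabad"}}')
```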

Global digital transformation solutions provider.
Job Description – Senior Technical Business Analyst
Location: Trivandrum (Preferred) | Open to any location in India
Shift Timings - 8 hours window between the 7:30 PM IST - 4:30 AM IST
About the Role
We are seeking highly motivated and analytically strong Senior Technical Business Analysts who can work seamlessly with business and technology stakeholders to convert a one-line problem statement into a well-defined project or opportunity. This role is ideal for professionals who have a strong foundation in data analytics, data engineering, data visualization, and data science, along with a strong drive to learn, collaborate, and grow in a dynamic, fast-paced environment.
As a Technical Business Analyst, you will be responsible for translating complex business challenges into actionable user stories, analytical models, and executable tasks in Jira. You will work across the entire data lifecycle—from understanding business context to delivering insights, solutions, and measurable outcomes.
Key Responsibilities
Business & Analytical Responsibilities
- Partner with business teams to understand one-line problem statements and translate them into detailed business requirements, opportunities, and project scope.
- Conduct exploratory data analysis (EDA) to uncover trends, patterns, and business insights.
- Create documentation including Business Requirement Documents (BRDs), user stories, process flows, and analytical models.
- Break down business needs into concise, actionable, and development-ready user stories in Jira.
Data & Technical Responsibilities
- Collaborate with data engineering teams to design, review, and validate data pipelines, data models, and ETL/ELT workflows.
- Build dashboards, reports, and data visualizations using leading BI tools to communicate insights effectively.
- Apply foundational data science concepts such as statistical analysis, predictive modeling, and machine learning fundamentals.
- Validate and ensure data quality, consistency, and accuracy across datasets and systems.
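The EDA and data-quality bullets above can be sketched with the standard library alone: a first-pass descriptive profile that also flags outliers. The sample series and the 2-sigma cutoff are illustrative choices, not a prescribed method.

```python
import statistics

def profile(values):
    """Basic descriptive profile used as a first EDA pass over a numeric series."""
    mean = statistics.fmean(values)
    stdev = statistics.stdev(values)
    # Flag values more than two standard deviations from the mean (illustrative rule)
    outliers = [v for v in values if abs(v - mean) > 2 * stdev]
    return {"mean": mean, "stdev": stdev, "outliers": outliers}

daily_orders = [101, 98, 104, 97, 102, 250]  # one suspicious spike
summary = profile(daily_orders)              # the spike is flagged as an outlier
```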
Collaboration & Execution
- Work closely with product, engineering, BI, and operations teams to support the end-to-end delivery of analytical solutions.
- Assist in development, testing, and rollout of data-driven solutions.
- Present findings, insights, and recommendations clearly and confidently to both technical and non-technical stakeholders.
Required Skillsets
Core Technical Skills
- 6+ years of Technical Business Analyst experience within 8+ years of overall professional experience
- Data Analytics: SQL, descriptive analytics, business problem framing.
- Data Engineering (Foundational): Understanding of data warehousing, ETL/ELT processes, cloud data platforms (AWS/GCP/Azure preferred).
- Data Visualization: Experience with Power BI, Tableau, or equivalent tools.
- Data Science (Basic/Intermediate): Python/R, statistical methods, fundamentals of ML algorithms.
Soft Skills
- Strong analytical thinking and structured problem-solving capability.
- Ability to convert business problems into clear technical requirements.
- Excellent communication, documentation, and presentation skills.
- High curiosity, adaptability, and eagerness to learn new tools and techniques.
Educational Qualifications
- BE/B.Tech or equivalent in:
- Computer Science / IT
- Data Science
What We Look For
- Demonstrated passion for data and analytics through projects and certifications.
- Strong commitment to continuous learning and innovation.
- Ability to work both independently and in collaborative team environments.
- Passion for solving business problems using data-driven approaches.
- Proven ability (or aptitude) to convert a one-line business problem into a structured project or opportunity.
Why Join Us?
- Exposure to modern data platforms, analytics tools, and AI technologies.
- A culture that promotes innovation, ownership, and continuous learning.
- Supportive environment to build a strong career in data and analytics.
Skills: Data Analytics, Business Analysis, SQL
Must-Haves
Technical Business Analyst (6+ years), SQL, Data Visualization (Power BI, Tableau), Data Engineering (ETL/ELT, cloud platforms), Python/R
Review Criteria
- Strong Senior Data Scientist (AI/ML/GenAI) Profile
- 5+ years of experience in designing, developing, and deploying Machine Learning / Deep Learning (ML/DL) systems in production
- Must have strong hands-on experience in Python and deep learning frameworks such as PyTorch, TensorFlow, or JAX.
- 1+ years of experience in fine-tuning Large Language Models (LLMs) using techniques like LoRA/QLoRA, and building RAG (Retrieval-Augmented Generation) pipelines.
- Must have experience with MLOps and production-grade systems including Docker, Kubernetes, Spark, model registries, and CI/CD workflows
Preferred
- Prior experience in open-source GenAI contributions, applied LLM/GenAI research, or large-scale production AI systems
- Preferred (Education) – B.S./M.S./Ph.D. in Computer Science, Data Science, Machine Learning, or a related field.
Job Specific Criteria
- CV Attachment is mandatory
- Which is your preferred job location (Mumbai / Bengaluru / Hyderabad / Gurgaon)?
- Are you okay with 3 Days WFO?
- Virtual Interview requires video to be on, are you okay with it?
Role & Responsibilities
Company is hiring a Senior Data Scientist with strong expertise in AI, machine learning engineering (MLE), and generative AI. You will play a leading role in designing, deploying, and scaling production-grade ML systems — including large language model (LLM)-based pipelines, AI copilots, and agentic workflows. This role is ideal for someone who thrives on balancing cutting-edge research with production rigor and loves mentoring while building impact-first AI applications.
Responsibilities:
- Own the full ML lifecycle: model design, training, evaluation, deployment
- Design production-ready ML pipelines with CI/CD, testing, monitoring, and drift detection
- Fine-tune LLMs and implement retrieval-augmented generation (RAG) pipelines
- Build agentic workflows for reasoning, planning, and decision-making
- Develop both real-time and batch inference systems using Docker, Kubernetes, and Spark
- Leverage state-of-the-art architectures: transformers, diffusion models, RLHF, and multimodal pipelines
- Collaborate with product and engineering teams to integrate AI models into business applications
- Mentor junior team members and promote MLOps, scalable architecture, and responsible AI best practices
Ideal Candidate
- 5+ years of experience in designing, deploying, and scaling ML/DL systems in production
- Proficient in Python and deep learning frameworks such as PyTorch, TensorFlow, or JAX
- Experience with LLM fine-tuning, LoRA/QLoRA, vector search (Weaviate/PGVector), and RAG pipelines
- Familiarity with agent-based development (e.g., ReAct agents, function-calling, orchestration)
- Solid understanding of MLOps: Docker, Kubernetes, Spark, model registries, and deployment workflows
- Strong software engineering background with experience in testing, version control, and APIs
- Proven ability to balance innovation with scalable deployment
- B.S./M.S./Ph.D. in Computer Science, Data Science, or a related field
- Bonus: Open-source contributions, GenAI research, or applied systems at scale
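The retrieval half of the RAG pipelines mentioned above can be illustrated without a vector store. This sketch swaps the learned embeddings and Weaviate/PGVector index for a toy bag-of-words similarity, purely to show the shape of the pipeline; the corpus and query are invented.

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words 'embedding'; production RAG uses a learned embedding model."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, corpus, k=1):
    """Rank documents by similarity to the query; top-k become LLM prompt context."""
    q = embed(query)
    scored = sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)
    return scored[:k]

docs = [
    "invoices are processed nightly by the billing service",
    "kubernetes pods restart automatically on failure",
    "the billing service retries failed invoices twice",
]
context = retrieve("how are failed invoices retried", docs, k=1)
```

In a real pipeline the retrieved passages are concatenated into the prompt alongside the user question before the LLM generates an answer.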
We’re Hiring – Automation Test Engineer!
We at Versatile Commerce are looking for passionate Automation Testing Professionals to join our growing team!
📍 Location: Gachibowli, Hyderabad (Work from Office)
💼 Experience: 3 – 5 Years
⏳ Notice Period: Immediate Joiners Preferred
What we’re looking for:
✅ Strong experience in Selenium / Cypress / Playwright
✅ Proficient in Java / Python / JavaScript
✅ Hands-on with TestNG / JUnit / Maven / Jenkins
✅ Experience in API Automation (Postman / REST Assured)
✅ Good understanding of Agile Testing & Defect Management Tools (JIRA, Zephyr)
Role Summary
We are seeking a Full-Stack Developer to build and secure features for our Therapy Planning Software (TPS), which integrates with RMS/RIS, EMR systems, devices (DICOM, Bluetooth, VR, robotics, FES), and supports ICD–ICF–ICHI coding. The role involves ~40% frontend and 60% backend development, with end-to-end responsibility for security across application layers.
Responsibilities
Frontend (40%)
- Build responsive, accessible UI in React + TypeScript (or Angular/Vue).
- Implement multilingual (i18n/l10n) and WCAG 2.1 accessibility standards.
- Develop offline-capable PWAs for home programs.
- Integrate REST/FHIR APIs for patient workflows, scheduling, and reporting.
- Support features like voice-to-text, video capture, and compression.
Backend (60%)
- Design and scale REST APIs using Python (FastAPI/Django).
- Build modules for EMR storage, assessments, therapy plans, and data logging.
- Implement HL7/FHIR endpoints and secure integrations with external EMRs.
- Handle file uploads (virus scanning, HD video compression, secure storage).
- Optimize PostgreSQL schemas and queries for performance.
- Implement RBAC, MFA, PDPA compliance, edit locks, and audit trails.
Security Layer (Ownership)
- Identity & Access: OAuth2/OIDC, JWT, MFA, SSO.
- Data Protection: TLS, AES-256 at rest, field-level encryption, immutable audit logs.
- Compliance: PDPA, HIPAA principles, MDA requirements.
- DevSecOps: Secure coding (OWASP ASVS), dependency scanning, secrets management.
- Monitoring: Logging/metrics (ELK/Prometheus), anomaly detection, DR/BCP preparedness.
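The identity bullets above mention JWT; as a sketch of what an HS256 token actually is, here is a stdlib-only signer/verifier. This is illustrative only (claims and secret are made up), and a production system should use a vetted library such as PyJWT rather than hand-rolled crypto.

```python
import base64
import hashlib
import hmac
import json

def _b64(data: bytes) -> str:
    # JWT uses URL-safe base64 with padding stripped
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(claims: dict, secret: bytes) -> str:
    """Build a compact JWS token (header.payload.signature) with an HS256 signature."""
    header = _b64(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = _b64(json.dumps(claims).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = _b64(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

def verify_jwt(token: str, secret: bytes) -> bool:
    header, payload, sig = token.split(".")
    expected = _b64(hmac.new(secret, f"{header}.{payload}".encode(),
                             hashlib.sha256).digest())
    return hmac.compare_digest(sig, expected)  # constant-time comparison

token = sign_jwt({"sub": "therapist-1", "role": "clinician"}, b"dev-secret")
```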
Requirements
- Strong skills in Python (FastAPI/Django) and React + TypeScript.
- Experience with HL7/FHIR, EMR data, and REST APIs.
- Knowledge of OAuth2/JWT authentication, RBAC, audit logging.
- Proficiency with PostgreSQL and database optimization.
- Cloud deployment (AWS/Azure) and containerization (Docker/K8s) a plus.
Added Advantage
- Familiarity with ICD, ICF, ICHI coding systems or medical diagnosis workflows.
Success Metrics
- Deliver secure end-to-end features with clinical workflow integration.
- Pass OWASP/ASVS L2 security baseline.
- Establish full audit trail and role-based access across at least one clinical workflow.
📍Company: Versatile Commerce
📍 Position: Data Scientists
📍 Experience: 3-9 yrs
📍 Location: Hyderabad (WFO)
📅 Notice Period: 0- 15 Days
At Pipaltree, we’re building an AI-enabled platform that helps brands understand how they’re truly perceived — not through surveys or static dashboards, but through real conversations happening across the world.
We’re a small team solving deep technical and product challenges: orchestrating large-scale conversation data, applying reasoning and summarization models, and turning this into insights that businesses can trust.
Requirements:
- Deep understanding of distributed systems and asynchronous programming in Python
- Experience with building scalable applications using LLMs or traditional ML techniques
- Experience with databases, caches, and microservices
- Experience with DevOps is a huge plus
Strong Full-Stack Developer Profile
Mandatory (Experience 1) - Must Have Minimum 5+ YOE in Software Development,
Mandatory (Experience 2) - Must have 4+ YOE in backend using Python.
Mandatory (Experience 3) - Must have good experience in frontend using React JS with knowledge of HTML, CSS, and JavaScript.
Mandatory (Experience 4) - Must have experience with any database - MySQL / PostgreSQL / Oracle / SQL Server

One of the reputed clients in India
Our Client is looking to hire a Databricks Admin immediately.
This is PAN-India bulk hiring.
Minimum of 6-8+ years with Databricks, PySpark/Python, and AWS.
Must have AWS experience.
A 15-30 day notice period is preferred.
Share profiles at hr at etpspl dot com
Please refer/share our email with friends/colleagues who are looking for a job.
Full-Stack Developer
Exp: 5+ years required
Night shift: 8 PM-5 AM / 9 PM-6 AM
Only immediate joiners can apply
We are seeking a mid-to-senior level Full-Stack Developer with a foundational understanding of software development, cloud services, and database management. In this role, you will contribute to both the front-end and back-end of our application, focusing on creating a seamless user experience supported by robust and scalable cloud infrastructure.
Key Responsibilities
● Develop and maintain user-facing features using React.js and TypeScript.
● Write clean, efficient, and well-documented JavaScript/TypeScript code.
● Assist in managing and provisioning cloud infrastructure on AWS using Infrastructure as Code (IaC) principles.
● Contribute to the design, implementation, and maintenance of our databases.
● Collaborate with senior developers and product managers to deliver high-quality software.
● Troubleshoot and debug issues across the full stack.
● Participate in code reviews to maintain code quality and share knowledge.
Qualifications
● Bachelor's degree in Computer Science, a related technical field, or equivalent practical experience.
● 5+ years of professional experience in web development.
● Proficiency in JavaScript and/or TypeScript.
● Proficiency in Golang and Python.
● Hands-on experience with the React.js library for building user interfaces.
● Familiarity with Infrastructure as Code (IaC) tools and concepts (e.g., AWS CDK, Terraform, or CloudFormation).
● Basic understanding of AWS and its core services (e.g., S3, EC2, Lambda, DynamoDB).
● Experience with database management, including relational (e.g., PostgreSQL) or NoSQL (e.g., DynamoDB, MongoDB) databases.
● Strong problem-solving skills and a willingness to learn.
● Familiarity with modern front-end build pipelines and tools like Vite and Tailwind CSS.
● Knowledge of CI/CD pipelines and automated testing.
Job Description:
Role: Data Scientist
Responsibilities:
Lead data science and machine learning projects, contributing to model development, optimization and evaluation.
Perform data cleaning, feature engineering, and exploratory data analysis.
Translate business requirements into technical solutions, document and communicate project progress, manage non-technical stakeholders.
Collaborate with other DS and engineers to deliver projects.
Technical Skills – Must have:
Experience in and understanding of the natural language processing (NLP) and large language model (LLM) landscape.
Proficiency with Python for data analysis, supervised & unsupervised learning ML tasks.
Ability to translate complex machine learning problem statements into specific deliverables and requirements.
Should have worked with major cloud platforms such as AWS, Azure or GCP.
Working knowledge of SQL and NoSQL databases.
Ability to create data and ML pipelines for more efficient and repeatable data science projects using MLOps principles.
Keep abreast of new tools, algorithms, and techniques in machine learning and work to implement them in the organization.
Strong understanding of evaluation and monitoring metrics for machine learning projects.
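The evaluation-metrics requirement above can be made concrete with a from-scratch sketch of precision, recall, and F1 for a binary classifier; libraries like scikit-learn provide the same via `sklearn.metrics`, and the label vectors here are invented for illustration.

```python
def classification_metrics(y_true, y_pred, positive=1):
    """Precision, recall, and F1 for a binary classifier, computed from raw labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"precision": precision, "recall": recall, "f1": f1}

# tp=2, fp=1, fn=1 -> precision, recall, and F1 all equal 2/3
m = classification_metrics([1, 0, 1, 1, 0], [1, 1, 1, 0, 0])
```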
Role: Data Scientist (Python + R Expertise)
Exp: 8 -12 Years
CTC: up to 30 LPA
Required Skills & Qualifications:
- 8–12 years of hands-on experience as a Data Scientist or in a similar analytical role.
- Strong expertise in Python and R for data analysis, modeling, and visualization.
- Proficiency in machine learning frameworks (scikit-learn, TensorFlow, PyTorch, caret, etc.).
- Strong understanding of statistical modeling, hypothesis testing, regression, and classification techniques.
- Experience with SQL and working with large-scale structured and unstructured data.
- Familiarity with cloud platforms (AWS, Azure, or GCP) and deployment practices (Docker, MLflow).
- Excellent analytical, problem-solving, and communication skills.
Preferred Skills:
- Experience with NLP, time series forecasting, or deep learning projects.
- Exposure to data visualization tools (Tableau, Power BI, or R Shiny).
- Experience working in product or data-driven organizations.
- Knowledge of MLOps and model lifecycle management is a plus.
If interested, kindly share your updated resume on 82008 31681
Join us to reimagine how businesses integrate data and automate processes – with AI at the core.
About FloData
FloData is re-imagining the iPaaS and Business Process Automation (BPA) space for a new era - one where business teams, not just IT, can integrate data, run automations, and solve ops bottlenecks using intuitive, AI-driven interfaces. We're a small, hands-on team with a deep technical foundation and strong industry connections. Backed by real-world learnings from our earlier platform version, we're now going all-in on building a generative AI-first experience.
The Opportunity
We’re looking for a GenAI Engineer to help build the intelligence layer of our new platform. From designing LLM-powered orchestration flows with LangGraph to building frameworks for evaluation and monitoring with LangSmith, you’ll shape how AI powers real-world enterprise workflows.
If you thrive on working at the frontier of LLM systems engineering, enjoy scaling prototypes into production-grade systems, and want to make AI reliable, explainable, and enterprise-ready - this is your chance to define a category-defining product.
What You'll Do
- Spend ~70% of your time architecting, prototyping, and productionizing AI systems (LLM orchestration, agents, evaluation, observability)
- Develop AI frameworks: orchestration (LangGraph), evaluation/monitoring (LangSmith), vector/graph DBs, and other GenAI infra
- Work with product engineers to seamlessly integrate AI services into frontend and backend workflows
- Build systems for AI evaluation, monitoring, and reliability to ensure trustworthy performance at scale
- Translate product needs into AI-first solutions, balancing rapid prototyping with enterprise-grade robustness
- Stay ahead of the curve by exploring emerging GenAI frameworks, tools, and research for practical application
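The orchestration and evaluation work described above follows a state-graph pattern. This is an illustrative toy in plain Python, not the real LangGraph API: nodes mutate shared state and return the name of the next node, with an evaluation step at the end. All node names and routing logic here are hypothetical:

```python
# Toy state-graph orchestrator sketching the pattern LangGraph formalizes.
def retrieve(state):
    state["docs"] = ["doc-1", "doc-2"]   # stand-in for a vector-store lookup
    return "generate"

def generate(state):
    state["answer"] = f"answer grounded in {len(state['docs'])} docs"
    return "evaluate"

def evaluate(state):
    # stand-in for an LLM-as-judge / LangSmith-style evaluation step
    state["passed"] = bool(state.get("docs")) and bool(state.get("answer"))
    return None                           # terminal node

GRAPH = {"retrieve": retrieve, "generate": generate, "evaluate": evaluate}

def run(entry="retrieve"):
    state, node = {}, entry
    while node is not None:               # walk edges until a terminal node
        node = GRAPH[node](state)
    return state

result = run()
print(result["passed"])
```

The value of the pattern is that evaluation and monitoring hooks live in the graph itself, so every production run is observable by construction.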
Must Have
- 3–5 years of engineering experience, with at least 1-2 years in GenAI systems
- Hands-on experience with LangGraph, LangSmith, LangChain, or similar frameworks for orchestration/evaluation
- Deep understanding of LLM workflows: prompt engineering, fine-tuning, RAG, evaluation, monitoring, and observability
- A strong product mindset—comfortable bridging research-level concepts with production-ready business use cases
- Startup mindset: resourceful, pragmatic, and outcome-driven
Good To Have
- Experience integrating AI pipelines with enterprise applications and hybrid infra setups (AWS, on-prem, VPCs)
- Experience building AI-native user experiences (assistants, copilots, intelligent automation flows)
- Familiarity with enterprise SaaS/IT ecosystems (Salesforce, Oracle ERP, Netsuite, etc.)
Why Join Us
- Own the AI backbone of a generational product at the intersection of AI, automation, and enterprise data
- Work closely with founders and leadership with no layers of bureaucracy
- End-to-end ownership of AI systems you design and ship
- Be a thought partner in setting AI-first principles for both tech and culture
- Onsite in Hyderabad, with flexibility when needed
Sounds like you?
We'd love to talk. Apply now or reach out directly to explore this opportunity.
Key Responsibilities
- Design, develop, and maintain scalable microservices and RESTful APIs using Python (Flask, FastAPI, or Django).
- Architect data models for SQL and NoSQL databases (PostgreSQL, ClickHouse, MongoDB, DynamoDB) to optimize performance and reliability.
- Implement efficient and secure data access layers, caching, and indexing strategies.
- Collaborate closely with product and frontend teams to deliver seamless user experiences.
- Build responsive UI components using HTML, CSS, JavaScript, and frameworks like React or Angular.
- Ensure system reliability, observability, and fault tolerance across services.
- Lead code reviews, mentor junior engineers, and promote engineering best practices.
- Contribute to DevOps and CI/CD workflows for smooth deployments and testing automation.
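The caching responsibility above is often the difference between a data access layer that scales and one that doesn't. A minimal stdlib sketch of read-through caching, with the web framework and real database omitted; `fetch_user` and the counter are hypothetical stand-ins:

```python
from functools import lru_cache
from collections import namedtuple

User = namedtuple("User", "id name")   # immutable result, safe to cache
CALLS = {"db": 0}                      # counts simulated database hits

@lru_cache(maxsize=1024)
def fetch_user(user_id: int) -> User:
    """Hypothetical data-access function; lru_cache absorbs repeat reads."""
    CALLS["db"] += 1                   # stand-in for an expensive DB query
    return User(user_id, f"user-{user_id}")

fetch_user(7)
fetch_user(7)                          # cache hit - no extra DB call
fetch_user(8)
print(CALLS["db"])                     # two distinct ids -> two DB hits
```

Returning an immutable `namedtuple` matters here: caching a mutable dict would let one caller corrupt the value every other caller sees.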
Required Skills & Experience
- 10+ years of professional software development experience.
- Strong proficiency in Python, with deep understanding of OOP, asynchronous programming, and performance optimization.
- Proven expertise in building FastAPI-based microservices architectures.
- Solid understanding of SQL and NoSQL data modeling, query optimization, and schema design.
- Excellent hands-on frontend proficiency with HTML, CSS, JavaScript, and a modern framework (React, Angular, or Vue).
- Experience working with cloud platforms (AWS, GCP, or Azure) and containerized deployments (Docker, Kubernetes).
- Familiarity with distributed systems, event-driven architectures, and messaging queues (Kafka, RabbitMQ).
- Excellent problem-solving, communication, and system design skills.
Job Summary:
Deqode is looking for a highly motivated and experienced Python + AWS Developer to join our growing technology team. This role demands hands-on experience in backend development, cloud infrastructure (AWS), containerization, automation, and client communication. The ideal candidate should be a self-starter with a strong technical foundation and a passion for delivering high-quality, scalable solutions in a client-facing environment.
Key Responsibilities:
- Design, develop, and deploy backend services and APIs using Python.
- Build and maintain scalable infrastructure on AWS (EC2, S3, Lambda, RDS, etc.).
- Automate deployments and infrastructure with Terraform and Jenkins/GitHub Actions.
- Implement containerized environments using Docker and manage orchestration via Kubernetes.
- Write automation and scripting solutions in Bash/Shell to streamline operations.
- Work with relational databases such as MySQL, writing and optimizing SQL queries.
- Collaborate directly with clients to understand requirements and provide technical solutions.
- Ensure system reliability, performance, and scalability across environments.
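Of the AWS services listed above, Lambda handlers are the most testable locally, since a handler is just a Python function taking an event dict. A minimal sketch of a handler for S3 object-created notifications; the event shape mirrors the documented S3 notification format, while the bucket and key names are made up:

```python
import urllib.parse

def handler(event, context=None):
    """Minimal AWS Lambda handler for S3 ObjectCreated notifications."""
    processed = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        # S3 keys arrive URL-encoded in the event payload
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        processed.append(f"s3://{bucket}/{key}")
    return {"statusCode": 200, "processed": processed}

sample_event = {  # hypothetical payload in the S3 put-notification shape
    "Records": [{"s3": {"bucket": {"name": "demo-bucket"},
                        "object": {"key": "incoming/file+1.csv"}}}]
}
result = handler(sample_event)
print(result)
```

Decoding the key with `unquote_plus` is the step most often missed: spaces in object names arrive as `+` in the event and break downstream lookups if left encoded.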
Required Skills:
- 3.5+ years of hands-on experience in Python development.
- Strong expertise in AWS services such as EC2, Lambda, S3, RDS, IAM, CloudWatch.
- Good understanding of Terraform or other Infrastructure as Code tools.
- Proficient with Docker and container orchestration using Kubernetes.
- Experience with CI/CD tools like Jenkins or GitHub Actions.
- Strong command of SQL/MySQL and scripting with Bash/Shell.
- Experience working with external clients or in client-facing roles.
Preferred Qualifications:
- AWS Certification (e.g., AWS Certified Developer or DevOps Engineer).
- Familiarity with Agile/Scrum methodologies.
- Strong analytical and problem-solving skills.
- Excellent communication and stakeholder management abilities.
Job Title : Senior QA Automation Architect (Cloud & Kubernetes)
Experience : 6+ Years
Location : India (Multiple Offices)
Shift Timings : 12 PM to 9 PM (Noon Shift)
Working Days : 5 Days WFO (NO Hybrid)
About the Role :
We’re looking for a Senior QA Automation Architect with deep expertise in cloud-native systems, Kubernetes, and automation frameworks.
You’ll design scalable test architectures, enhance automation coverage, and ensure product reliability across hybrid-cloud and distributed environments.
Key Responsibilities :
- Architect and maintain test automation frameworks for microservices.
- Integrate automated tests into CI/CD pipelines (Jenkins, GitHub Actions).
- Ensure reliability, scalability, and observability of test systems.
- Work closely with DevOps and Cloud teams to streamline automation infrastructure.
Mandatory Skills :
- Kubernetes, Helm, Docker, Linux
- Cloud Platforms : AWS / Azure / GCP
- CI/CD Tools : Jenkins, GitHub Actions
- Scripting : Python, Pytest, Bash
- Monitoring & Performance : Prometheus, Grafana, Jaeger, K6
- IaC Practices : Terraform / Ansible
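Since the mandatory skills above pair Python with Pytest, it is worth noting the core pytest idiom: plain `test_*` functions with bare `assert` statements, which pytest collects without any imports. A small sketch testing a retry helper of the kind automation frameworks wrap around flaky infrastructure calls; the `retry` helper itself is hypothetical:

```python
import time

def retry(fn, attempts=3, delay=0.0):
    """Tiny retry helper for flaky calls; re-raises after the last attempt."""
    last_err = None
    for _ in range(attempts):
        try:
            return fn()
        except Exception as err:   # broad catch is deliberate in this demo
            last_err = err
            time.sleep(delay)
    raise last_err

def test_retry_succeeds_after_transient_failures():
    calls = {"n": 0}
    def flaky():
        calls["n"] += 1
        if calls["n"] < 3:
            raise RuntimeError("transient")
        return "ok"
    assert retry(flaky) == "ok"
    assert calls["n"] == 3

test_retry_succeeds_after_transient_failures()  # pytest would collect this itself
```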
Good to Have :
- Experience with Service Mesh (Istio/Linkerd).
- Container Security or DevSecOps exposure.
SENIOR DATA ENGINEER:
ROLE SUMMARY:
Own the design and delivery of petabyte-scale data platforms and pipelines across AWS and modern Lakehouse stacks. You’ll architect, code, test, optimize, and operate ingestion, transformation, storage, and serving layers. This role requires autonomy, strong engineering judgment, and partnership with project managers, infrastructure teams, testers, and customer architects to land secure, cost-efficient, and high-performing solutions.
RESPONSIBILITIES:
- Architecture and design: Create HLD/LLD/SAD, source–target mappings, data contracts, and optimal designs aligned to requirements.
- Pipeline development: Build and test robust ETL/ELT for batch, micro-batch, and streaming across RDBMS, flat files, APIs, and event sources.
- Performance and cost tuning: Profile and optimize jobs, right-size infrastructure, and model license/compute/storage costs.
- Data modeling and storage: Design schemas and SCD strategies; manage relational, NoSQL, data lakes, Delta Lakes, and Lakehouse tables.
- DevOps and release: Establish coding standards, templates, CI/CD, configuration management, and monitored release processes.
- Quality and reliability: Define DQ rules and lineage; implement SLA tracking, failure detection, RCA, and proactive defect mitigation.
- Security and governance: Enforce IAM best practices, retention, audit/compliance; implement PII detection and masking.
- Orchestration: Schedule and govern pipelines with Airflow and serverless event-driven patterns.
- Stakeholder collaboration: Clarify requirements, present design options, conduct demos, and finalize architectures with customer teams.
- Leadership: Mentor engineers, set FAST goals, drive upskilling and certifications, and support module delivery and sprint planning.
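The SCD strategies mentioned in the data-modeling responsibility above usually mean Type 2: close out the current row and append a new version when a tracked attribute changes. A deliberately simplified pure-Python sketch with hypothetical column names; in production this would be a Delta Lake `MERGE` or equivalent warehouse SQL:

```python
from datetime import date

def scd2_upsert(dim_rows, incoming, today):
    """Type-2 slowly-changing-dimension merge on a list-of-dicts table."""
    current = {r["key"]: r for r in dim_rows if r["end_date"] is None}
    for rec in incoming:
        live = current.get(rec["key"])
        if live is None:                    # brand-new key
            dim_rows.append({**rec, "start_date": today, "end_date": None})
        elif live["city"] != rec["city"]:   # tracked attribute changed
            live["end_date"] = today        # close out the old version
            dim_rows.append({**rec, "start_date": today, "end_date": None})
    return dim_rows

dim = [{"key": 1, "city": "Hyderabad",
        "start_date": date(2024, 1, 1), "end_date": None}]
dim = scd2_upsert(dim, [{"key": 1, "city": "Bangalore"}], date(2025, 6, 1))
print(len(dim))  # closed-out history row plus the new current version
```

The invariant to preserve is that each key has exactly one row with `end_date IS NULL`; point-in-time queries then filter on the date range.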
REQUIRED QUALIFICATIONS:
- Experience: 15+ years designing distributed systems at petabyte scale; 10+ years building data lakes and multi-source ingestion.
- Cloud (AWS): IAM, VPC, EC2, EKS/ECS, S3, RDS, DMS, Lambda, CloudWatch, CloudFormation, CloudTrail.
- Programming: Python (preferred), PySpark, SQL for analytics, window functions, and performance tuning.
- ETL tools: AWS Glue, Informatica, Databricks, GCP DataProc; orchestration with Airflow.
- Lakehouse/warehousing: Snowflake, BigQuery, Delta Lake/Lakehouse; schema design, partitioning, clustering, performance optimization.
- DevOps/IaC: Extensive hands-on Terraform practice; mature CI/CD experience (GitHub Actions, Jenkins); config governance and release management.
- Serverless and events: Design event-driven distributed systems on AWS.
- NoSQL: 2–3 years with DocumentDB including data modeling and performance considerations.
- AI services: AWS Entity Resolution, AWS Comprehend; run custom LLMs on Amazon SageMaker; use LLMs for PII classification.
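The window-function requirement above can be demonstrated in a fully self-contained sqlite3 session (window functions need SQLite 3.25+, which modern Python builds bundle); the table and column names are illustrative:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (region TEXT, amount INTEGER)")
con.executemany("INSERT INTO sales VALUES (?, ?)",
                [("south", 100), ("south", 300), ("north", 200), ("north", 50)])

# Rank rows within each region by amount, highest first
rows = con.execute("""
    SELECT region, amount,
           ROW_NUMBER() OVER (PARTITION BY region ORDER BY amount DESC) AS rn
    FROM sales
""").fetchall()

# Keep only the top sale per region
top_per_region = {region: amount for region, amount, rn in rows if rn == 1}
print(top_per_region)
```

The same `PARTITION BY ... ORDER BY` pattern carries over directly to Snowflake, BigQuery, and Spark SQL, which is why it features so heavily in data-engineering interviews.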
NICE-TO-HAVE QUALIFICATIONS:
- Data governance automation: 10+ years defining audit, compliance, retention standards and automating governance workflows.
- Table and file formats: Apache Parquet; Apache Iceberg as analytical table format.
- Advanced LLM workflows: RAG and agentic patterns over proprietary data; re-ranking with index/vector store results.
- Multi-cloud exposure: Azure ADF/ADLS, GCP Dataflow/DataProc; FinOps practices for cross-cloud cost control.
OUTCOMES AND MEASURES:
- Engineering excellence: Adherence to processes, standards, and SLAs; reduced defects and non-compliance; fewer recurring issues.
- Efficiency: Faster run times and lower resource consumption with documented cost models and performance baselines.
- Operational reliability: Faster detection, response, and resolution of failures; quick turnaround on production bugs; strong release success.
- Data quality and security: High DQ pass rates, robust lineage, minimal security incidents, and audit readiness.
- Team and customer impact: On-time milestones, clear communication, effective demos, improved satisfaction, and completed certifications/training.
LOCATION AND SCHEDULE:
● Location: Outside US (OUS).
● Schedule: Minimum 6 hours of overlap with US time zones.
Position Overview
We are seeking an experienced Solutions Architect to lead the technical design and implementation strategy for our finance automation platform. This role sits at the intersection of business requirements, technical architecture, and implementation excellence. You will be responsible for translating complex Statement of Work (SOW) requirements into comprehensive technical designs while mentoring implementation engineers and driving platform evolution.
Key Responsibilities
Solution Design & Architecture
1. Translate SOW requirements into detailed C4 architecture models and Business Process Canvas documentation
2. Design end-to-end solutions for complex finance automation workflows including reconciliations, book closure, and financial reporting
3. Create comprehensive technical specifications for custom development initiatives
4. Establish architectural standards and best practices for finance domain solutions
Technical Leadership & Mentorship
1. Mentor Implementation Engineers on solution design, technical approaches, and best practices
2. Lead technical reviews and ensure solution quality across all implementations
3. Provide guidance on complex technical challenges and architectural decisions
4. Foster knowledge sharing and technical excellence within the solutions team
Platform Strategy & Development
1. Make strategic decisions on when to push feature development to the Platform Team vs. custom implementation
2. Interface with Implementation Support team to assess platform gaps and enhancement opportunities
3. Collaborate with Program Managers to track and prioritize new platform feature development
4. Contribute to product roadmap discussions based on client requirements and market trends
Client Engagement & Delivery
1. Lead technical discussions with enterprise clients during pre-sales and implementation phases
2. Design scalable solutions that align with client's existing technology stack and future roadmap
3. Ensure solutions comply with financial regulations (Ind AS/IFRS/GAAP) and industry standards
4. Drive technical aspects of complex implementations from design through go-live
Required Qualifications
Technical Expertise
● 8+ years of experience in solution architecture, preferably in fintech or enterprise software
● Strong expertise in system integration, API design, and microservices architecture
● Proficiency in C4 modeling and architectural documentation standards
● Experience with Business Process Management (BPM) and workflow design
● Advanced knowledge of data architecture, ETL pipelines, and real-time data processing
● Strong programming skills in Python, Java, or similar languages
● Experience with cloud platforms (AWS, Azure, GCP) and containerization technologies.
Financial Domain Knowledge
● Deep understanding of finance and accounting principles (Ind AS/IFRS/GAAP)
● Experience with financial systems integration (ERP, GL, AP/AR systems)
● Knowledge of financial reconciliation processes and automation strategies
● Understanding of regulatory compliance requirements in financial services
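Since reconciliation automation is central to the domain knowledge above, a deliberately simplified matching pass may clarify the core idea; keys and record shapes here are hypothetical, and real reconciliation engines add amount tolerances, date windows, and many-to-one matching:

```python
def reconcile(ledger, bank):
    """Match ledger entries to bank lines on exact (reference, amount) pairs."""
    bank_index = {(b["ref"], b["amount"]): b for b in bank}
    matched, unmatched = [], []
    for entry in ledger:
        hit = bank_index.pop((entry["ref"], entry["amount"]), None)
        (matched if hit else unmatched).append(entry)
    # anything left in the index exists only on the bank side
    return matched, unmatched, list(bank_index.values())

ledger = [{"ref": "INV-1", "amount": 500}, {"ref": "INV-2", "amount": 750}]
bank   = [{"ref": "INV-1", "amount": 500}, {"ref": "FEE-9", "amount": 20}]
matched, unmatched, bank_only = reconcile(ledger, bank)
print(len(matched), len(unmatched), len(bank_only))
```

The three output buckets map directly to the close process: matched items clear, and the two exception buckets drive the investigation queue.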
Leadership & Communication
● Proven experience mentoring technical teams and driving technical excellence
● Strong stakeholder management skills with ability to communicate with C-level executives
● Experience working in agile environments with cross-functional teams
● Excellent technical documentation and presentation skills
Preferred Qualifications
● Master's degree in Computer Science, Engineering, or related technical field
● Experience with finance automation platforms (Blackline, Trintech, Anaplan, etc.)
● Certification in enterprise architecture frameworks (TOGAF, Zachman)
● Experience with data visualization tools (Power BI, Tableau, Looker)
● Background in SaaS platform development and multi-tenant architectures
● Experience with DevOps practices and CI/CD pipeline design
● Knowledge of machine learning applications in finance automation.
Skills & Competencies
Technical Skills
● Solution architecture and system design
● C4 modeling and architectural documentation
● API design and integration patterns
● Cloud-native architecture and microservices
● Data architecture and pipeline design
● Programming and scripting languages
Financial & Business Skills
● Financial process automation
● Business process modeling and optimization
● Regulatory compliance and risk management
● Enterprise software implementation
● Change management and digital transformation
Leadership Skills
● Technical mentorship and team development
● Strategic thinking and decision making
● Cross-functional collaboration
● Client relationship management
● Project and program management
Soft Skills
● Critical thinking and problem-solving
● Cross-functional collaboration
● Task and project management
● Stakeholder management
● Team leadership
● Technical documentation
● Communication with technical and non-technical stakeholders
Mandatory Criteria:
● Looking for candidates who are Solution Architects in Finance from product companies.
● The candidate should have worked in fintech for at least 4–5 years.
● Candidate should have strong technical and architecture skills with finance exposure.
● Candidate should be from product companies.
● Candidate should have 8+ years’ experience in solution architecture, preferably in fintech or enterprise software.
● Candidate should have proficiency in Python, Java (or similar languages) and be hands-on with cloud platforms (AWS/Azure/GCP) and containerization (Docker/Kubernetes).
● Candidate should have deep knowledge of finance and accounting principles (Ind AS/IFRS/GAAP) and financial system integrations (ERP, GL, AP/AR).
● Candidate should have expertise in system integration, API design, microservices, and C4 modeling.
● Candidate should have experience in financial reconciliations, automation strategies, and regulatory compliance.
● Candidate should be strong in problem-solving, cross-functional collaboration, project management, documentation, and communication.
● Candidate should have proven experience in mentoring technical teams and driving excellence.