
50+ Python Jobs in Hyderabad | Python Job openings in Hyderabad

Apply to 50+ Python Jobs in Hyderabad on CutShort.io. Explore the latest Python Job opportunities across top companies like Google, Amazon & Adobe.

CAW.Tech

Posted by Ranjana Singh
Hyderabad
5 - 8 yrs
Best in industry
Python
Django
PostgreSQL
MySQL
FastAPI
+1 more

We are looking for a Staff Engineer - Python to join one of our engineering teams at our office in Hyderabad.


What would you do?

  • Own end-to-end delivery of backend projects from requirements and LLDs to production.
  • Lead technical design and execution, ensuring scalability, reliability, and code quality.
  • Build and integrate chatbot and AI-driven workflows with third-party systems.
  • Diagnose and resolve complex performance and production issues.
  • Drive testing, documentation, and engineering best practices.
  • Mentor engineers and act as the primary technical point of contact for the project/client.


Who Should Apply?

  • 5+ years of hands-on experience building backend systems in Python.
  • Proficiency in building web-based applications using Django or similar frameworks.
  • In-depth knowledge of the Python stack and API-first system design.
  • Experience working with SQL and NoSQL databases including PostgreSQL/MySQL, MongoDB, ElasticSearch, or key-value stores.
  • Strong experience owning design, delivery, and technical decision-making.
  • Proven ability to lead and mentor engineers through reviews and execution.
  • Clear communicator with a high-ownership, delivery-focused mindset.


Nice to Have

  • Experience contributing to system-level design discussions.
  • Prior exposure to AI/LLM-based systems or conversational platforms.
  • Experience working directly with clients or external stakeholders.
  • Background in fast-paced product or service environments.
Auxo AI
Posted by Kritika Dhingra
Bengaluru (Bangalore), Mumbai, Hyderabad, Gurugram
2 - 8 yrs
₹10L - ₹30L / yr
Amazon Web Services (AWS)
Data Transformation Tool (DBT)
SQL
Python
Spark
+1 more

AuxoAI is seeking a skilled and experienced Data Engineer to join our dynamic team. The ideal candidate will have 3-7 years of prior experience in data engineering, with a strong background in working on modern data platforms. This role offers an exciting opportunity to work on diverse projects, collaborating with cross-functional teams to design, build, and optimize data pipelines and infrastructure.


Location: Bangalore, Hyderabad, Mumbai, and Gurgaon


Responsibilities:

  • Design, build, and operate scalable on-premises or cloud data architecture.
  • Analyze business requirements and translate them into technical specifications.
  • Design, develop, and implement data engineering solutions using DBT on cloud platforms (Snowflake, Databricks).
  • Design, develop, and maintain scalable data pipelines and ETL processes.
  • Collaborate with data scientists and analysts to understand data requirements and implement solutions that support analytics and machine learning initiatives.
  • Optimize data storage and retrieval mechanisms to ensure performance, reliability, and cost-effectiveness.
  • Implement data governance and security best practices to ensure compliance and data integrity.
  • Troubleshoot and debug data pipeline issues, providing timely resolution and proactive monitoring.
  • Stay abreast of emerging technologies and industry trends, recommending innovative solutions to enhance data engineering capabilities.


Requirements:

  • Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
  • 3+ years of prior experience in data engineering, with a focus on designing and building data pipelines.
  • Experience working with DBT to implement end-to-end data engineering processes on Snowflake and Databricks.
  • Comprehensive understanding of the Snowflake and Databricks ecosystems.
  • Strong programming skills in SQL and Python or PySpark.
  • Experience with data modeling, ETL processes, and data warehousing concepts.
  • Familiarity with CI/CD processes or other orchestration tools is a plus.


Navitas Business Consulting
Posted by Solomon Yericherla
Hyderabad
5 - 10 yrs
₹15L - ₹22L / yr
Java
Python
Amazon Web Services (AWS)
JavaScript
RESTful APIs
+4 more

Responsibilities:

  • Design, develop, and deploy full-stack web applications (front-end, back-end, APIs, and databases).
  • Build responsive, user-friendly UIs using modern JavaScript frameworks (React, Vue, or Angular).
  • Develop robust backend services and RESTful or GraphQL APIs using Node.js, Python, Java, or similar technologies.
  • Manage and optimize databases (SQL and NoSQL).
  • Collaborate with UX/UI designers, product managers, and QA engineers to refine requirements and deliver solutions.
  • Implement CI/CD pipelines and support cloud deployments (AWS, Azure, or GCP).
  • Write clean, testable, and maintainable code with appropriate documentation.
  • Monitor performance, identify bottlenecks, and troubleshoot production issues.
  • Stay up to date with emerging technologies and recommend improvements to tools, processes, and architecture.

Requirements:

  • 5–10 years of experience in backend or full-stack development (Java, C#, Python, or Node.js preferred).
  • Proficiency in front-end technologies: HTML5, CSS3, JavaScript/TypeScript, and frameworks like React, Vue.js, or Angular.
  • Strong experience with server-side programming (Node.js, Python/Django, Java/Spring Boot, or .NET).
  • Experience with databases: PostgreSQL, MySQL, MongoDB, or similar.
  • Familiarity with API design, microservices architecture, and REST/GraphQL best practices.
  • Working knowledge of version control (Git/GitHub) and DevOps pipelines.
  • Understanding of cloud platforms (AWS, Azure, GCP) and containerization (Docker, Kubernetes).

Global digital transformation solutions provider

Agency job
via Peak Hire Solutions by Dhara Thakkar
Hyderabad
4 - 10 yrs
₹8L - ₹20L / yr
Automated testing
Amazon Web Services (AWS)
Python
Test Automation (QA)
AWS CloudFormation
+25 more

JOB DETAILS:

* Job Title: Tester III - Software Testing (Automation testing + Python + AWS)

* Industry: Global digital transformation solutions provider

* Salary: Best in Industry

* Experience: 4–10 years

* Location: Hyderabad

Job Description

Responsibilities:

  • Develop, maintain, and execute automation test scripts using Python.
  • Build reliable and reusable test automation frameworks for web and cloud-based applications.
  • Work with AWS cloud services for test execution, environment management, and integration needs.
  • Perform functional, regression, and integration testing as part of the QA lifecycle.
  • Analyze test failures, identify root causes, raise defects, and collaborate with development teams.
  • Participate in requirement review, test planning, and strategy discussions.
  • Contribute to CI/CD setup and integration of automation suites.

 

Required Experience:

  • Strong hands-on experience in Automation Testing.
  • Proficiency in Python for automation scripting and framework development.
  • Understanding and practical exposure to AWS services (Lambda, EC2, S3, CloudWatch, or similar).
  • Good knowledge of QA methodologies, SDLC/STLC, and defect management.
  • Familiarity with automation tools/frameworks (e.g., Selenium, PyTest).
  • Experience with Git or other version control systems.
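The PyTest-style scripting named above can be illustrated with a minimal, self-contained sketch (the function under test and its data are hypothetical, not from this posting; PyTest itself picks up `test_*` functions automatically, so no framework imports are needed for plain assertions):

```python
# Minimal PyTest-style checks: plain functions with bare asserts.

def normalize_status(raw: str) -> str:
    """Map a raw service response to a canonical status string."""
    return raw.strip().lower()

def test_normalize_status_trims_and_lowercases():
    assert normalize_status("  PASSED ") == "passed"

def test_normalize_status_is_idempotent():
    assert normalize_status(normalize_status("FAILED")) == "failed"
```

Saved as, say, `test_status.py`, this would run under `pytest test_status.py`, with each failing assert reported as a defect candidate.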

 

Good to Have:

  • API testing experience (REST, Postman, REST Assured).
  • Knowledge of Docker/Kubernetes.
  • Exposure to Agile/Scrum environment.

 

Skills: Automation testing, Python, Java, ETL, AWS

 

Global digital transformation solutions provider

Agency job
via Peak Hire Solutions by Dhara Thakkar
Hyderabad
4 - 10 yrs
₹8L - ₹20L / yr
Automated testing
Python
Web applications
Software Testing (QA)
Systems Development Life Cycle (SDLC)
+18 more

JOB DETAILS:

* Job Title: Tester III - Software Testing (Automation Testing + Python + Azure)

* Industry: Global digital transformation solutions provider

* Salary: Best in Industry

* Experience: 4–10 years

* Location: Hyderabad

Job Description

Responsibilities:

  • Design, develop, and execute automation test scripts using Python.
  • Build and maintain scalable test automation frameworks.
  • Work with Azure DevOps for CI/CD, pipeline automation, and test management.
  • Perform functional, regression, and integration testing for web and cloud‑based applications.
  • Analyze test results, log defects, and collaborate with developers for timely closure.
  • Participate in requirement analysis, test planning, and strategy discussions.
  • Ensure test coverage, maintain script quality, and optimize automation suites.


Required Experience:

  • Strong hands-on expertise in automation testing for web/cloud applications.
  • Solid proficiency in Python for creating automation scripts and frameworks.
  • Experience working with Azure services and Azure DevOps pipelines.
  • Good understanding of QA methodologies, SDLC/STLC, and defect lifecycle.
  • Experience with tools like Selenium, PyTest, or similar frameworks (good to have).
  • Familiarity with Git or other version control tools.

 

Good to Have:

  • Experience with API testing (REST, Postman, or similar tools)
  • Knowledge of Docker/Kubernetes
  • Exposure to Agile/Scrum environments

 

Skills: Automation Testing, Python, Java, Azure

Global digital transformation solutions provider

Agency job
via Peak Hire Solutions by Dhara Thakkar
Hyderabad
4 - 10 yrs
₹8L - ₹20L / yr
Automated testing
Software Testing (QA)
Mobile App Testing (QA)
Web applications
JavaScript
+17 more

JOB DETAILS:

* Job Title: Tester III - Software Testing (Playwright + API Testing)

* Industry: Global digital transformation solutions provider

* Salary: Best in Industry

* Experience: 4–10 years

* Location: Hyderabad

Job Description

Responsibilities:

  • Design, develop, and maintain automated test scripts for web applications using Playwright.
  • Perform API testing using industry-standard tools and frameworks.
  • Collaborate with developers, product owners, and QA teams to ensure high-quality releases.
  • Analyze test results, identify defects, and track them to closure.
  • Participate in requirement reviews, test planning, and test strategy discussions.
  • Ensure automation coverage, maintain reusable test frameworks, and optimize execution pipelines.

 

Required Experience:

  • Strong hands-on experience in Automation Testing for web-based applications.
  • Proven expertise in Playwright (JavaScript, TypeScript, or Python-based scripting).
  • Solid experience in API testing (Postman, REST Assured, or similar tools).
  • Good understanding of software QA methodologies, tools, and processes.
  • Ability to write clear, concise test cases and automation scripts.
  • Experience with CI/CD pipelines (Jenkins, GitHub Actions, Azure DevOps) is an added advantage.

 

Good to Have:

  • Knowledge of cloud environments (AWS/Azure)
  • Experience with version control tools like Git
  • Familiarity with Agile/Scrum methodologies

 

Skills: Automation Testing, SQL, API Testing, SoapUI Testing, Playwright

IXG Inc
Hyderabad
0 - 1 yrs
₹20000 - ₹40000 / mo
Design thinking
AI Agents
Python
MongoDB
PostgreSQL
+1 more

AI-Native Software Developer Intern


Build real AI agents used daily across the company

We’re looking for a high-agency, AI-native software developer intern to help us build internal AI agents that improve productivity across our entire company (80–100 people using them daily).


You will ship real systems, used by real teams, with real impact.

If you’ve never built anything outside coursework, this role is probably not a fit.


What You’ll Work On

You will work directly on designing, building, deploying, and iterating AI agents that power internal workflows.

Examples of problems you may tackle:


Internal AI agents for:

  • Knowledge retrieval across Notion / docs / Slack
  • Automated report generation
  • Customer support assistance
  • Process automation (ops, hiring, onboarding, etc.)
  • Decision-support copilots
  • Prompt engineering + structured outputs + tool-using agents

Building workflows using:

  • LLM APIs
  • Vector databases
  • Agent frameworks
  • Internal dashboards
  • Improving reliability, latency, cost, and usability of AI systems
  • Designing real UX around AI tools (not just scripts)

You will own features end-to-end:

  • Problem understanding
  • Solution design
  • Implementation
  • Testing
  • Deployment
  • Iteration based on user feedback
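As a toy illustration of the retrieval step behind the knowledge-retrieval agents described above (a real system would use embeddings and a vector database; this stdlib-only sketch ranks documents by word overlap, and all names and data are hypothetical):

```python
from collections import Counter
import math

def score(query: str, doc: str) -> float:
    """Cosine similarity over bag-of-words counts, a crude stand-in
    for embedding similarity in a retrieval pipeline."""
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    dot = sum(q[w] * d[w] for w in q)
    norm = (math.sqrt(sum(v * v for v in q.values()))
            * math.sqrt(sum(v * v for v in d.values())))
    return dot / norm if norm else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Return the k documents most similar to the query."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

docs = [
    "onboarding checklist for new hires",
    "quarterly report generation pipeline",
    "customer support escalation policy",
]
print(retrieve("how do I escalate a customer support ticket", docs))
# → ['customer support escalation policy']
```

Swapping the scorer for embedding similarity against a vector store turns this into the retrieval half of a RAG loop; the agent framework then feeds the top-k documents into the LLM prompt.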


What We Expect From You

You must:

  • Be AI-native: you actively use tools like:
      • ChatGPT / Claude / Cursor / Copilot
      • AI for debugging, scaffolding, refactoring
      • Prompt iteration
      • Rapid prototyping
  • Be comfortable with at least one programming language (Python, TypeScript, JS, etc.)
  • Have strong critical thinking:
      • You question requirements
      • You think about edge cases
      • You optimize systems, not just make them “work”
  • Be high agency:
      • You don’t wait for step-by-step instructions
      • You proactively propose solutions
      • You take ownership of outcomes
  • Be able to learn fast on the job

Help will be provided, but you will not be spoon-fed.


Absolute Requirement (Non-Negotiable)

If you have not built any side projects with a visible output, you will most likely be rejected.

We expect at least one of:

  • A deployed web app
  • A GitHub repo with meaningful commits
  • A working AI tool
  • A live demo link
  • A product you built and shipped
  • An agent, automation, bot, or workflow you created


Bonus Points (Strong Signals)

These are not required but will strongly differentiate you:

  • Built projects using:
      • LLM APIs (OpenAI, Anthropic, etc.)
      • LangChain / LlamaIndex / custom agent frameworks
      • Vector DBs like Pinecone, Weaviate, FAISS
      • RAG systems
  • Experience deploying to Vercel, Fly.io, Render, AWS, etc.
  • Built internal tools for a team before
  • Strong product intuition (you care about UX, not just code)
  • Experience automating your own workflows using scripts or AI

What You’ll Gain

You will get:

  • Real experience building AI agents used daily
  • Ownership over production systems
  • Deep exposure to:
      • AI architecture
      • Product thinking
      • Iterative engineering
      • Tradeoffs (cost vs. latency vs. accuracy)
  • A portfolio that actually means something in 2026
  • A strong shot at long-term roles based on performance

If you perform well, you won’t just leave with a certificate; you’ll leave with real-world building experience.


Who This Is Perfect For

  • People who already build things for fun
  • People who automate their own life with scripts/tools
  • People who learn by shipping
  • People who prefer responsibility over structure
  • People who are excited by ambiguity

Who This Is Not For

Be honest with yourself:

  • If you need step-by-step instructions
  • If you avoid open-ended problems
  • If you’ve never built anything outside assignments
  • If you dislike using AI tools while coding

This will be frustrating for you.


How To Apply

Send:

  • Your GitHub
  • Links to projects (deployed preferred)
  • A short note explaining:
      • What you built
      • Why you built it
      • What you’d improve if you had more time

Strong portfolios beat strong resumes.

Startup

Agency job
via Techno Wise by Chanchal Amin
Hyderabad
2 - 6 yrs
₹8L - ₹11L / yr
Python
Django
Flask
FastAPI
SQL
+6 more

Required Skills and Qualifications:

  • 2–3 years of professional experience in Python development.
  • Strong understanding of object-oriented programming.
  • Experience with frameworks such as Django, Flask, or FastAPI.
  • Knowledge of REST APIs, JSON, and web integration.
  • Familiarity with SQL and database management systems.
  • Experience with Git or other version control tools.
  • Good problem-solving and debugging skills.
  • Strong communication and teamwork abilities.


OpsTree Solutions

Posted by Reshika Mendiratta
Hyderabad
4+ yrs
Up to ₹30L / yr (varies)
Python
Amazon Web Services (AWS)
EKS
Kubernetes
DevOps
+3 more

Key Responsibilities:

  • Lead the architecture, design, and implementation of scalable, secure, and highly available AWS infrastructure leveraging services such as VPC, EC2, IAM, S3, SNS/SQS, EKS, KMS, and Secrets Manager.
  • Develop and maintain reusable, modular IaC frameworks using Terraform and Terragrunt, and mentor team members on IaC best practices.
  • Drive automation of infrastructure provisioning, deployment workflows, and routine operations through advanced Python scripting.
  • Take ownership of cost optimization strategy by analyzing usage patterns, identifying savings opportunities, and implementing guardrails across multiple AWS environments.
  • Define and enforce infrastructure governance, including secure access controls, encryption policies, and secret management mechanisms.
  • Collaborate cross-functionally with development, QA, and operations teams to streamline and scale CI/CD pipelines for containerized microservices on Kubernetes (EKS).
  • Establish monitoring, alerting, and observability practices to ensure platform health, resilience, and performance.
  • Serve as a technical mentor and thought leader, guiding junior engineers and shaping cloud adoption and DevOps culture across the organization.
  • Evaluate emerging technologies and tools, recommending improvements to enhance system performance, reliability, and developer productivity.
  • Ensure infrastructure complies with security, regulatory, and operational standards, and drive initiatives around audit readiness and compliance.

Mandatory Skills & Experience:

  • AWS (Advanced Expertise): VPC, EC2, IAM, S3, SNS/SQS, EKS, KMS, Secrets Management
  • Infrastructure as Code: Extensive experience with Terraform and Terragrunt, including module design and IaC strategy
  • Kubernetes: Strong hands-on command of Kubernetes
  • Scripting & Automation: Proficient in Python, with a strong track record of building tools, automating workflows, and integrating cloud services
  • Cloud Cost Optimization: Proven ability to analyze cloud spend and implement sustainable cost control strategies
  • Leadership: Experience in leading DevOps/infrastructure teams or initiatives, mentoring engineers, and making architecture-level decisions
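Tag-governance guardrails of the kind described above are often enforced with small Python checks; a minimal sketch (the required tag keys are a hypothetical policy, and in practice the tag map would come from the AWS API or Terraform state):

```python
# Hypothetical tagging policy for cost tracking and governance.
REQUIRED_TAGS = {"owner", "cost-center", "environment"}

def missing_tags(resource_tags: dict[str, str]) -> set[str]:
    """Return the required tag keys absent from a resource's tag map."""
    return REQUIRED_TAGS - set(resource_tags)

# A resource missing its cost-center tag fails the guardrail.
untagged = missing_tags({"owner": "platform-team", "environment": "prod"})
print(sorted(untagged))  # → ['cost-center']
```

Wired into a CI pipeline or a scheduled Lambda, a check like this flags untagged resources before they pollute cost reports.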

Nice to Have:

  • Experience designing or managing CI/CD pipelines for Kubernetes-based environments
  • Backend development background in Python (e.g., FastAPI, Flask)
  • Familiarity with monitoring/observability tools such as Prometheus, Grafana, CloudWatch
  • Understanding of system performance tuning, capacity planning, and scalability best practices
  • Exposure to compliance standards such as SOC 2, HIPAA, or ISO 27001
Kanerika Software

Posted by Ariba Khan
Hyderabad, Indore, Ahmedabad
4 - 11 yrs
Up to ₹30L / yr (varies)
SQL
Snowflake
Airflow
Python

About Kanerika:

Kanerika Inc. is a premier global software products and services firm that specializes in providing innovative solutions and services for data-driven enterprises. Our focus is to empower businesses to achieve their digital transformation goals and maximize their business impact through the effective use of data and AI.


We leverage cutting-edge technologies in data analytics, data governance, AI-ML, GenAI/ LLM and industry best practices to deliver custom solutions that help organizations optimize their operations, enhance customer experiences, and drive growth.


Awards and Recognitions:

Kanerika has won several awards over the years, including:

1. Best Place to Work 2023 by Great Place to Work®

2. Top 10 Most Recommended RPA Start-Ups in 2022 by RPA Today

3. NASSCOM Emerge 50 Award in 2014

4. Frost & Sullivan India 2021 Technology Innovation Award for its Kompass composable solution architecture

5. Kanerika has also been recognized for its commitment to customer privacy and data security, having achieved ISO 27701, SOC2, and GDPR compliances.


Working for us:

Kanerika is rated 4.6/5 on Glassdoor, for many good reasons. We truly value our employees' growth, well-being, and diversity, and people’s experiences bear this out. At Kanerika, we offer a host of enticing benefits that create an environment where you can thrive both personally and professionally. From our inclusive hiring practices and mandatory training on creating a safe work environment to our flexible working hours and generous parental leave, we prioritize the well-being and success of our employees.


Our commitment to professional development is evident through our mentorship programs, job training initiatives, and support for professional certifications. Additionally, our company-sponsored outings and various time-off benefits ensure a healthy work-life balance. Join us at Kanerika and become part of a vibrant and diverse community where your talents are recognized, your growth is nurtured, and your contributions make a real impact. See the benefits section below for the perks you’ll get while working for Kanerika.


Role Responsibilities: 

The following are high-level responsibilities you will take on, though the role is not limited to them:

  • Design, develop, and implement modern data pipelines, data models, and ETL/ELT processes.
  • Architect and optimize data lake and warehouse solutions using Microsoft Fabric, Databricks, or Snowflake.
  • Enable business analytics and self-service reporting through Power BI and other visualization tools.
  • Collaborate with data scientists, analysts, and business users to deliver reliable and high-performance data solutions.
  • Implement and enforce best practices for data governance, data quality, and security.
  • Mentor and guide junior data engineers; establish coding and design standards.
  • Evaluate emerging technologies and tools to continuously improve the data ecosystem.


Required Qualifications:

  • Bachelor's or Master's degree in Computer Science, Information Technology, Engineering, or a related field.
  • 4–11 years of experience in data engineering or data platform development.
  • Strong hands-on experience with SQL, Snowflake, Python, and Airflow.
  • Solid understanding of data modeling, data governance, security, and CI/CD practices.

Preferred Qualifications:

  • Familiarity with data modeling techniques and practices for Power BI.
  • Knowledge of Azure Databricks or other data processing frameworks.
  • Knowledge of Microsoft Fabric or other Cloud Platforms.


What we need?

· B.Tech in Computer Science or equivalent.


Why join us?

  • Work with a passionate and innovative team in a fast-paced, growth-oriented environment.
  • Gain hands-on experience in data engineering with exposure to real-world projects.
  • Opportunity to learn from experienced professionals and enhance your technical skills.
  • Contribute to exciting initiatives and make an impact from day one.
  • Competitive compensation and potential for growth within the company.
  • Recognized for excellence in data and AI solutions with industry awards and accolades.


Employee Benefits:

1. Culture:

  • Open Door Policy: Encourages open communication and accessibility to management.
  • Open Office Floor Plan: Fosters a collaborative and interactive work environment.
  • Flexible Working Hours: Allows employees to have flexibility in their work schedules.
  • Employee Referral Bonus: Rewards employees for referring qualified candidates.
  • Appraisal Process Twice a Year: Provides regular performance evaluations and feedback.


2. Inclusivity and Diversity:

  • Hiring practices that promote diversity: Ensures a diverse and inclusive workforce.
  • Mandatory POSH training: Promotes a safe and respectful work environment.


3. Health Insurance and Wellness Benefits:

  • GMC and Term Insurance: Offers medical coverage and financial protection.
  • Health Insurance: Provides coverage for medical expenses.
  • Disability Insurance: Offers financial support in case of disability.


4. Child Care & Parental Leave Benefits:

  • Company-sponsored family events: Creates opportunities for employees and their families to bond.
  • Generous Parental Leave: Allows parents to take time off after the birth or adoption of a child.
  • Family Medical Leave: Offers leave for employees to take care of family members' medical needs.


5. Perks and Time-Off Benefits:

  • Company-sponsored outings: Organizes recreational activities for employees.
  • Gratuity: Provides a monetary benefit as a token of appreciation.
  • Provident Fund: Helps employees save for retirement.
  • Generous PTO: Offers more than the industry standard for paid time off.
  • Paid sick days: Allows employees to take paid time off when they are unwell.
  • Paid holidays: Gives employees paid time off for designated holidays.
  • Bereavement Leave: Provides time off for employees to grieve the loss of a loved one.


6. Professional Development Benefits:

  • L&D with FLEX- Enterprise Learning Repository: Provides access to a learning repository for professional development.
  • Mentorship Program: Offers guidance and support from experienced professionals.
  • Job Training: Provides training to enhance job-related skills.
  • Professional Certification Reimbursements: Assists employees in obtaining professional certifications.
  • Promote from Within: Encourages internal growth and advancement opportunities.
Hyderabad
10 - 14 yrs
₹30L - ₹38L / yr
Python
Django
RESTful APIs
NOSQL Databases
Communication Skills
+1 more

Role Summary

We are looking for a seasoned Python/Django expert with 10–14 years of real-world development experience and a strong background in leading engineering teams. The selected candidate will be responsible for managing complex technical initiatives, mentoring team members, ensuring best coding practices, and partnering closely with cross-functional teams. This position demands deep technical proficiency, strong leadership capability, and exceptional communication skills.

Primary Responsibilities

· Lead, guide, and mentor a team of Python/Django engineers, offering hands-on technical support and direction.

· Architect, design, and deliver secure, scalable, and high-performing web applications.

· Manage the complete software development lifecycle including requirements gathering, system design, development, testing, deployment, and post-launch maintenance.

· Ensure compliance with coding standards, architectural patterns, and established development best practices.

· Collaborate with product teams, QA, UI/UX, and other stakeholders to ensure timely and high-quality product releases.

· Perform detailed code reviews, optimize system performance, and resolve production-level issues.

· Drive engineering improvements such as automation, CI/CD implementation, and modernization of outdated systems.

· Create and maintain technical documentation while providing regular updates to leadership and stakeholders.

Required Skills & Qualifications

· 10–14 years of professional experience in software development with strong expertise in Python and Django.

· Solid understanding of key web technologies, including REST APIs, HTML, CSS, and JavaScript.

· Hands-on experience working with relational and NoSQL databases (such as PostgreSQL, MySQL, or MongoDB).

· Familiarity with major cloud platforms (AWS, Azure, or GCP) and container tools like Docker and Kubernetes is a plus.

· Proficient in Git workflows, CI/CD pipelines, and automated testing tools.

· Strong analytical and problem-solving skills, especially in designing scalable and high-availability systems.

· Excellent communication skills—both written and verbal.

· Demonstrated leadership experience in mentoring teams and managing technical deliverables.

· Must be available to work on-site in the Hyderabad office; remote work is not allowed.

Preferred Qualifications

· Experience with microservices, asynchronous frameworks (such as FastAPI or Celery), or event-driven architectures.

· Familiarity with Agile/Scrum methodologies.

· Previous background as a technical lead or engineering manager.

IT Services & Staffing Solutions Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Hyderabad
12 - 14 yrs
₹29L - ₹38L / yr
Amazon Web Services (AWS)
DevOps
Terraform
Troubleshooting
Amazon VPC
+16 more

REVIEW CRITERIA:

MANDATORY:

  • Strong Hands-On AWS Cloud Engineering / DevOps Profile
  • Mandatory (Experience 1): Must have 12+ years of experience in AWS Cloud Engineering / Cloud Operations / Application Support
  • Mandatory (Experience 2): Must have strong hands-on experience supporting AWS production environments (EC2, VPC, IAM, S3, ALB, CloudWatch)
  • Mandatory (Infrastructure as Code): Must have hands-on Infrastructure as Code experience using Terraform in production environments
  • Mandatory (AWS Networking): Strong understanding of AWS networking and connectivity (VPC design, routing, NAT, load balancers, hybrid connectivity basics)
  • Mandatory (Cost Optimization): Exposure to cost optimization and usage tracking in AWS environments
  • Mandatory (Core Skills): Experience handling monitoring, alerts, incident management, and root cause analysis
  • Mandatory (Soft Skills): Strong communication skills and stakeholder coordination skills


ROLE & RESPONSIBILITIES:

We are looking for a hands-on AWS Cloud Engineer to support day-to-day cloud operations, automation, and reliability of AWS environments. This role works closely with the Cloud Operations Lead, DevOps, Security, and Application teams to ensure stable, secure, and cost-effective cloud platforms.


KEY RESPONSIBILITIES:

  • Operate and support AWS production environments across multiple accounts
  • Manage infrastructure using Terraform and support CI/CD pipelines
  • Support Amazon EKS clusters, upgrades, scaling, and troubleshooting
  • Build and manage Docker images and push to Amazon ECR
  • Monitor systems using CloudWatch and third-party tools; respond to incidents
  • Support AWS networking (VPCs, NAT, Transit Gateway, VPN/DX)
  • Assist with cost optimization, tagging, and governance standards
  • Automate operational tasks using Python, Lambda, and Systems Manager
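The Python/Lambda automation mentioned above boils down to a handler function; a minimal sketch (the event shape and behavior are hypothetical, and a production handler would act on the event through boto3, e.g. restarting a service or notifying an on-call channel):

```python
import json

def lambda_handler(event, context):
    """Hypothetical Lambda handler: acknowledge a CloudWatch-style
    alarm event and return an API-Gateway-shaped response."""
    alarm = event.get("alarmName", "unknown")
    return {
        "statusCode": 200,
        "body": json.dumps({"acknowledged": alarm}),
    }

print(lambda_handler({"alarmName": "cpu-high"}, None))
```

Deployed behind an EventBridge rule or SNS subscription, such a handler is the usual building block for the incident-response and tagging automations listed above.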


IDEAL CANDIDATE:

  • Strong hands-on AWS experience (EC2, VPC, IAM, S3, ALB, CloudWatch)
  • Experience with Terraform and Git-based workflows
  • Hands-on experience with Kubernetes / EKS
  • Experience with CI/CD tools (GitHub Actions, Jenkins, etc.)
  • Scripting experience in Python or Bash
  • Understanding of monitoring, incident management, and cloud security basics


NICE TO HAVE:

  • AWS Associate-level certifications
  • Experience with Karpenter, Prometheus, New Relic
  • Exposure to FinOps and cost optimization practices
Xenspire

Posted by Ariba Khan
Hyderabad
5 - 8 yrs
Up to ₹30L / yr (varies)
Python
Java
Generative AI
Large Language Models (LLM)

About the company:

At Xenspire Technologies Pvt. Ltd., we are building People-First AI products—AI that augments human decision-making, reduces cognitive load, and earns trust through transparency, control, and reliability.


As a Lead Engineer in the Founding Engineering Team, youʼll help set the technical direction and build the core systems that everything else will run on—product architecture, engineering standards, and AI-first capabilities embedded into real workflows. This is a challenging environment: short feedback loops, meaningful ownership, and problems that donʼt come with a playbook.


If you like shipping fast, thinking deeply, and building systems that scale from day one, youʼll fit right in.


What Youʼll Do:

  • Build and own core systems (web + backend + data) from scratch—designed to scale.
  • Develop People-First AI capabilities: copilots, semantic search, automated workflows, and decision support—designed with guardrails, explainability, and human-in-the-loop controls.
  • Drive architecture decisions: APIs, database design, eventing, caching, security basics, observability, and performance.
  • Convert ambiguous business needs into clean product experiences with strong engineering discipline.
  • Establish engineering standards: code quality, reviews, CI/CD, testing strategy, release readiness.
  • Mentor engineers through example—this is a hands-on role, not a coordination role.
  • Partner closely with founders/product/design; make trade-offs and ship outcomes, not just output.


What Weʼre Looking For:

  • 5–8 years building production-grade software, ideally in product companies or high-growth startups.
  • Strong expertise in backend (Python/Java, APIs, scalability patterns) and databases (PostgreSQL/MySQL plus one NoSQL/search system; Elastic/OpenSearch/Vector DB is a plus)
  • Proven experience building platforms/products from 0 to 1, then stabilizing for scale.
  • Practical AI experience (not just demos): LLM integrations, prompt/tooling patterns, evaluation, safety/guardrails; RAG/semantic search, embeddings, vector stores, reranking, data pipelines
  • High ownership mindset; comfortable with ambiguity, tight timelines, and strong accountability.
  • Strong communication—clear docs, crisp decisions, visible trade-offs.
  • Experience working on SaaS products or platforms
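
As a hedged illustration of the RAG/semantic-search stack mentioned above, a minimal embedding-similarity retriever might look like the following. The three-dimensional "embeddings" are toy values standing in for real model output:

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def top_k(query_vec, doc_vecs, k=2):
    # Rank documents by similarity to the query embedding.
    scored = [(doc_id, cosine(query_vec, v)) for doc_id, v in doc_vecs.items()]
    return sorted(scored, key=lambda t: t[1], reverse=True)[:k]

# Toy "embeddings"; a real system would use a model plus a vector store.
docs = {
    "refund-policy": [0.9, 0.1, 0.0],
    "shipping-faq":  [0.1, 0.9, 0.1],
    "api-docs":      [0.0, 0.2, 0.9],
}
print(top_k([0.8, 0.2, 0.0], docs, k=1))  # refund-policy ranks first
```

Production systems layer reranking, chunking, and guardrails on top of this retrieval core.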


Nice to Have:

  • Multi-tenant SaaS experience, RBAC, audit logs, and security-first design patterns.
  • Cloud familiarity: AWS/GCP/Azure, containers, basic infra-as-code, observability tooling.
  • Experience shipping AI features with measurable quality (latency, accuracy, cost, adoption).
Semi-Conductor Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Hyderabad
10 - 12 yrs
₹30L - ₹35L / yr
Signal integrity
Systems analysis and design
Schematic
Python
Perl

MANDATORY CRITERIA:

  • Education: B.Tech / M.Tech in ECE / CSE / IT
  • Experience: 10–12 years in hardware board design, system hardware engineering, and full product deployment cycles
  • Proven expertise in digital, analog, and power electronic circuit analysis & design
  • Strong hands-on experience designing boards with SoCs, FPGAs, CPLDs, and MPSoC architectures
  • Deep understanding of signal integrity, EMI/EMC, and high-speed design considerations
  • Must have successfully completed at least two hardware product development cycles from high-level design to final deployment
  • Ability to independently handle schematic design, design analysis (DC drop, SI), and cross-team design reviews
  • Experience in sourcing & procurement of electronic components, PCBs, and mechanical parts for embedded/IoT/industrial hardware
  • Strong experience in board bring-up, debugging, issue investigation, and cross-functional triage with firmware/software teams
  • Expertise in hardware validation, test planning, test execution, equipment selection, debugging, and report preparation
  • Proficiency in Cadence Allegro or Altium EDA tools (mandatory)
  • Experience coordinating with layout, mechanical, SI, EMC, manufacturing, and supply chain teams
  • Strong understanding of manufacturing services, production pricing models, supply chain, and logistics for electronics/electromechanical components


DESCRIPTION:

COMPANY OVERVIEW:

The company is a semiconductor and embedded system design company with a focus on Embedded, Turnkey ASICs, Mixed Signal IP, Semiconductor & Product Engineering and IoT solutions catering to Aerospace & Defence, Consumer Electronics, Automotive, Medical and Networking & Telecommunications.


REQUIRED SKILLS:

  • Extensive experience in hardware board design across multiple product field-deployment cycles.
  • Strong foundation and expertise in analyzing digital, analog, and power electronic circuits.
  • Proficient with SoC-, FPGA-, CPLD-, and MPSoC-architecture-based board designs.
  • Knowledgeable in signal integrity and EMI/EMC concepts for digital and power electronics.
  • Completed at least two projects from high-level design to final product-level deployment.
  • Capable of independently managing a product's schematic and design analysis (DC drop, signal integrity), and coordinating reviews with peers in layout, mechanical, SI, and EMC teams.
  • Sourcing and procurement of electronic components, PCBs, and mechanical parts for cutting-edge IoT, embedded, and industrial product development.
  • Experienced in board bring-up, issue investigation, and triage in collaboration with firmware and software teams.
  • Skilled in preparing hardware design documentation, validation test planning, identifying necessary test equipment, test development, execution, debugging, and report preparation.
  • Effective communication and interpersonal skills for collaborative work with cross-functional teams, including post-silicon bench validation, BIOS, and driver development/QA.
  • Hands-on experience with Cadence Allegro/Altium EDA tools is essential.
  • Familiarity with programming and scripting languages like Python and Perl, and experience in test automation, is advantageous.
  • Strong exposure to coordinating manufacturing services, production pricing models, supply chain, and logistics in the electronics and electromechanical components domain.
Semi-Conductor Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Hyderabad, Bengaluru (Bangalore)
10 - 12 yrs
₹30L - ₹35L / yr
Python
Perl
EDA
Test Automation (QA)
Supply Chain Management (SCM)

MANDATORY CRITERIA:

  • Education: B.Tech / M.Tech in ECE / CSE / IT
  • Experience: 10–12 years in hardware board design, system hardware engineering, and full product deployment cycles
  • Proven expertise in digital, analog, and power electronic circuit analysis & design
  • Strong hands-on experience designing boards with SoCs, FPGAs, CPLDs, and MPSoC architectures
  • Deep understanding of signal integrity, EMI/EMC, and high-speed design considerations
  • Must have successfully completed at least two hardware product development cycles from high-level design to final deployment
  • Ability to independently handle schematic design, design analysis (DC drop, SI), and cross-team design reviews
  • Experience in sourcing & procurement of electronic components, PCBs, and mechanical parts for embedded/IoT/industrial hardware
  • Strong experience in board bring-up, debugging, issue investigation, and cross-functional triage with firmware/software teams
  • Expertise in hardware validation, test planning, test execution, equipment selection, debugging, and report preparation
  • Proficiency in Cadence Allegro or Altium EDA tools (mandatory)
  • Experience coordinating with layout, mechanical, SI, EMC, manufacturing, and supply chain teams
  • Strong understanding of manufacturing services, production pricing models, supply chain, and logistics for electronics/electromechanical components


DESCRIPTION:

COMPANY OVERVIEW:

The company is a semiconductor and embedded system design company with a focus on Embedded, Turnkey ASICs, Mixed Signal IP, Semiconductor & Product Engineering and IoT solutions catering to Aerospace & Defence, Consumer Electronics, Automotive, Medical and Networking & Telecommunications.


REQUIRED SKILLS:

  • Extensive experience in hardware board design across multiple product field-deployment cycles.
  • Strong foundation and expertise in analyzing digital, analog, and power electronic circuits.
  • Proficient with SoC-, FPGA-, CPLD-, and MPSoC-architecture-based board designs.
  • Knowledgeable in signal integrity and EMI/EMC concepts for digital and power electronics.
  • Completed at least two projects from high-level design to final product-level deployment.
  • Capable of independently managing a product's schematic and design analysis (DC drop, signal integrity), and coordinating reviews with peers in layout, mechanical, SI, and EMC teams.
  • Sourcing and procurement of electronic components, PCBs, and mechanical parts for cutting-edge IoT, embedded, and industrial product development.
  • Experienced in board bring-up, issue investigation, and triage in collaboration with firmware and software teams.
  • Skilled in preparing hardware design documentation, validation test planning, identifying necessary test equipment, test development, execution, debugging, and report preparation.
  • Effective communication and interpersonal skills for collaborative work with cross-functional teams, including post-silicon bench validation, BIOS, and driver development/QA.
  • Hands-on experience with Cadence Allegro/Altium EDA tools is essential.
  • Familiarity with programming and scripting languages like Python and Perl, and experience in test automation, is advantageous.
  • Strong exposure to coordinating manufacturing services, production pricing models, supply chain, and logistics in the electronics and electromechanical components domain.
Aptroid Consulting
Posted by Eman Khan
Hyderabad
7 - 12 yrs
Upto ₹47L / yr (Varies)
Python
Angular (2+)
Kubernetes
SQL

About the company:

Aptroid Consulting (India) Pvt Ltd is a web development company focused on helping marketers transform the customer experience, increasing engagement and driving revenue by using customer data to inform every interaction in real time, tailored to each individual's behavior.


About the Role:

We are hiring for Senior Full Stack Developers to strengthen the LiveIntent engineering team. The role requires strong backend depth combined with solid frontend expertise to build and scale high-performance, data-intensive systems.


Candidates are expected to demonstrate excellent analytical and problem-solving skills, along with strong system design capabilities for large-scale, distributed applications. Prior experience in AdTech or similar high-throughput domains is highly desirable.


Required Skills & Experience:

  • 7–12 years of hands-on experience in full-stack development
  • Strong proficiency in Python with Django (ORM, REST APIs, performance tuning)
  • Solid experience with Angular (modern versions, component architecture)
  • Hands-on experience with Docker and Kubernetes in production environments
  • Strong understanding of MySQL, including query optimization and schema design
  • Experience using Datadog for monitoring, metrics, and observability
  • Excellent analytical, problem-solving, and debugging skills
  • Proven experience in system design for scalable, distributed systems


Good to Haves:

  • Experience with Node.js
  • Strong background in database schema design and data modeling
  • Prior experience working in AdTech / MarTech / digital advertising platforms
  • Exposure to event-driven systems, real-time data pipelines, or high-volume traffic systems
  • Experience with CI/CD pipelines and cloud platforms (AWS)


Key Responsibilities:

  • Design, develop, and maintain scalable full-stack applications using Python (Django) and Angular
  • Build and optimize backend services handling large data volumes and high request throughput
  • Design and implement RESTful APIs with a focus on performance, security, and reliability
  • Lead and contribute to system design discussions covering scalability, fault tolerance, and observability
  • Containerize applications using Docker and deploy/manage workloads on Kubernetes
  • Design, optimize, and maintain MySQL database schemas, queries, and indexes
  • Implement monitoring, logging, and alerting using Datadog
  • Perform deep debugging and root-cause analysis of complex production issues
  • Collaborate with product, platform, and data teams to deliver business-critical features
  • Mentor junior engineers and promote engineering best practices 
Moolya Software Testing Private Limited
Posted by Durga Anand
Hyderabad
5 - 8 yrs
₹5L - ₹23L / yr
Java
Selenium
BDD
Cucumber
Rest Assured

Job Title: QA Automation Engineer

Key Responsibilities & Skills:

  • Strong hands-on experience in Java automation testing
  • Expertise in Selenium for web application automation
  • Experience with BDD frameworks using Cucumber (feature files and step definitions)
  • Hands-on experience in API automation using Rest Assured
  • Working knowledge of Python automation scripting
  • Experience with Robot Framework for test automation
  • Ability to design, develop, and maintain scalable automation frameworks
  • Experience in test execution, reporting, and defect tracking
  • Strong analytical, problem-solving, and communication skills
Bits In Glass
Posted by Nikita Sinha
Hyderabad, Pune, Mohali
5 - 8 yrs
Upto ₹30L / yr (Varies)
Java
Python
CI/CD
React.js
Angular (2+)

Design, build, and operate end-to-end web and API solutions (front end + back end) with strong automation, observability, and production reliability. You will own features from concept through deployment and steady state, including incident response and continuous improvement.


Key Responsibilities:

Engineering & Delivery

  • Translate business requirements into technical designs, APIs, and data models.
  • Develop back-end services using Java and Python, and front-end components using React / Angular / Vue (where applicable).
  • Build REST / GraphQL APIs, batch jobs, streaming jobs, and system integration adapters.
  • Write efficient SQL/NoSQL queries; optimize schemas, indexes, and data flows (ETL / CDC as needed).

Automation, CI/CD & Operations

  • Automate builds, testing, packaging, and deployments using CI/CD pipelines.
  • Create Linux shell and Python scripts for operational tasks, environment automation, and diagnostics.
  • Manage configuration, feature flags, environment parity, and Infrastructure as Code (where applicable).

Reliability, Security & Quality

  • Embed security best practices: authentication/authorization, input validation, secrets management, TLS.
  • Implement unit, integration, contract, and performance tests with enforced quality gates.
  • Add observability: structured logs, metrics, traces, health checks, dashboards, and alerts.
  • Apply resilience patterns: retries, timeouts, circuit breakers, and graceful degradation.
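
The resilience patterns above can be sketched as a small retry-with-backoff helper. This is illustrative only: the names and limits are assumptions, and a production version would add jitter and a circuit breaker:

```python
import time

def retry(func, attempts=3, base_delay=0.1, retriable=(ConnectionError,)):
    # Retry a callable with exponential backoff on retriable errors,
    # re-raising the last error once attempts are exhausted.
    for i in range(attempts):
        try:
            return func()
        except retriable:
            if i == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** i))

calls = {"n": 0}

def flaky():
    # Fails twice, then succeeds: stands in for an unreliable downstream call.
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

print(retry(flaky))  # → ok
```

Timeouts and circuit breakers complement this by capping how long, and how often, the retries are even attempted.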

Production Ownership

  • Participate in on-call rotations, incident triage, RCA, and permanent fixes.
  • Refactor legacy code and reduce technical debt with measurable impact.
  • Maintain technical documentation, runbooks, and architecture decision records (ADRs).

Collaboration & Leadership

  • Mentor peers and contribute to engineering standards and best practices.
  • Work closely with Product, QA, Security, and Ops to balance scope, risk, and timelines.

Qualifications

Must Have

  • Strong experience in Java (core concepts, concurrency, REST frameworks).
  • Strong Python experience (services + scripting).
  • Solid Linux skills with automation using shell/Python.
  • Web services expertise: REST/JSON, API design, versioning, pagination, error handling.
  • Databases: Relational (SQL tuning, transactions) plus exposure to NoSQL / caching (Redis).
  • CI/CD tools: Git, pipelines, artifact management.
  • Testing frameworks: JUnit, PyTest, API testing tools.
  • Observability tools: Prometheus, Grafana, ELK, OpenTelemetry (or equivalents).
  • Strong production support mindset with incident management, SLA/SLO awareness, and RCA experience.

Good to Have

  • Messaging & streaming platforms: Kafka, MQ.
  • Infrastructure as Code: Terraform, Ansible.
  • Cloud exposure: AWS / Azure / GCP, including managed data services.
  • Front-end experience with React / Angular / Vue and TypeScript.
  • Deployment strategies: feature flags, canary, blue/green.
  • Knowledge of cost optimization and capacity planning.

Key Performance Indicators (KPIs)

  • Deployment frequency & change failure rate
  • Mean Time to Detect (MTTD) & Mean Time to Recover (MTTR)
  • API latency (p95) and availability vs SLOs
  • Defect escape rate & automated test coverage
  • Technical debt reduction (items resolved per quarter)
  • Incident recurrence trend (continuous reduction)

Soft Skills

  • End-to-end ownership mindset
  • Data-driven decision making
  • Bias for automation and simplification
  • Proactive risk identification
  • Clear, timely, and effective communication

About the Company – Bits In Glass

  • 20+ years of industry experience
  • Merged with Crochet Technologies in 2021 to form a larger global organization
  • Offices in Pune, Hyderabad, and Chandigarh
  • Top 30 global Pega partner and sponsor of PegaWorld
  • Elite Appian Partner since 2008
  • Operations across US, Canada, UK, and India
  • Dedicated Global Pega Center of Excellence

Employee Benefits

  • Career Growth: Clear advancement paths and learning opportunities
  • Challenging Projects: Global, cutting-edge client work
  • Global Exposure: Collaboration with international teams
  • Flexible Work Arrangements: Work-life balance support
  • Comprehensive Benefits: Competitive compensation, health insurance, paid time off
  • Learning & Upskilling: AI-enabled Pega solutions, data engineering, integrations, cloud migration

Company Culture & Values

  • Collaborative & Inclusive: Teamwork, innovation, and respect for diverse ideas
  • Continuous Learning: Certifications and skill development encouraged
  • Integrity: Ethical and transparent practices
  • Excellence: High standards in delivery
  • Client-Centricity: Tailored solutions with measurable impact


Highfly Sourcing
Posted by Highfly Hr
Singapore, Switzerland, New Zealand, Dubai, Dublin, Ireland, Augsburg, Germany, Manchester (United Kingdom), Qatar, Kuwait, Malaysia, Bengaluru (Bangalore), Mumbai, Delhi, Gurugram, Noida, Ghaziabad, Faridabad, Pune, Hyderabad, Goa
3 - 5 yrs
₹15L - ₹25L / yr
SQL
PHP
Python
Data Visualization
Data Structures

We are seeking a motivated Data Analyst to support business operations by analyzing data, preparing reports, and delivering meaningful insights. The ideal candidate should be comfortable working with data, identifying patterns, and presenting findings in a clear and actionable way.

Key Responsibilities:

  • Collect, clean, and organize data from internal and external sources
  • Analyze large datasets to identify trends, patterns, and opportunities
  • Prepare regular and ad-hoc reports for business stakeholders
  • Create dashboards and visualizations using tools like Power BI or Tableau
  • Work closely with cross-functional teams to understand data requirements
  • Ensure data accuracy, consistency, and quality across reports
  • Document data processes and analysis methods


Hashone Careers
Posted by Madhavan I
Hyderabad
6 - 10 yrs
₹15L - ₹28L / yr
Data Analytics
Python
SQL
Data Warehouse (DWH)
Data modeling

Job Description

Role: Data Analyst

Experience: 6 - 9 Years

Location: Hyderabad

Work Mode: Work from Office (5 Days)


Overview

We are seeking a highly skilled Data Analyst with 6+ years of experience in analytics, data modeling, and advanced SQL. The ideal candidate has strong expertise in building scalable data models using dbt, writing efficient Python scripts, and delivering high-quality insights that support data-driven decision-making.


Key Responsibilities

Design, develop, and maintain data models using dbt (Core and dbt Cloud).

Build and optimize complex SQL queries to support reporting, analytics, and data pipelines.

Write Python scripts for data transformation, automation, and analytics workflows.

Ensure data quality, integrity, and consistency across multiple data sources.

Collaborate with cross-functional teams (Engineering, Product, Business) to understand data needs.

Develop dashboards and reports to visualize insights (using tools such as Tableau, Looker, or Power BI).

Perform deep-dive exploratory analysis to identify trends, patterns, and business opportunities.

Document data models, pipelines, and processes.

Contribute to scaling the analytics stack and improving data architecture.
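
As a rough sketch of the Python transformation and data-quality work described above (the record shape and column names are invented for illustration):

```python
def clean_orders(rows):
    # Normalize raw order records: strip whitespace, coerce amounts to float,
    # and drop rows missing an order id.
    cleaned = []
    for row in rows:
        order_id = (row.get("order_id") or "").strip()
        if not order_id:
            continue
        cleaned.append({
            "order_id": order_id,
            "amount": float(row.get("amount", 0) or 0),
            "region": (row.get("region") or "unknown").strip().lower(),
        })
    return cleaned

raw = [
    {"order_id": " A-1 ", "amount": "19.99", "region": "APAC "},
    {"order_id": "", "amount": "5.00"},   # dropped: no order id
    {"order_id": "A-2", "amount": None},  # amount coerced to 0.0
]
print(clean_orders(raw))
```

In a dbt-centric stack the same normalization would usually live in a staging model, with Python reserved for sources SQL handles poorly.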


Required Qualifications

6 - 9 years of hands-on experience in data analytics or data engineering.

Expert-level skills in SQL (complex joins, window functions, performance tuning).

Strong experience building and maintaining dbt data models.

Proficiency in Python for data manipulation, scripting, and automation.

Solid understanding of data warehousing concepts (e.g., dimensional modeling, ELT/ETL pipelines).

Familiarity with cloud data platforms (Snowflake, BigQuery, Redshift, etc.).

Strong analytical thinking and problem-solving skills.

Excellent communication skills with the ability to present insights to stakeholders.

Trino and lakehouse architecture experience is good to have.


Financial Services Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Hyderabad
4 - 5 yrs
₹10L - ₹20L / yr
Python
CI/CD
SQL
Kubernetes
Stakeholder management

Required Skills: CI/CD Pipeline, Kubernetes, SQL Database, Excellent Communication & Stakeholder Management, Python

 

Criteria:

Looking for candidates with a notice period of 15 to 30 days.

Looking for candidates based in Hyderabad only.

Looking for candidates from EPAM only.

1. 4+ years of software development experience

2. Strong experience with Kubernetes, Docker, and CI/CD pipelines in cloud-native environments.

3. Hands-on with NATS for event-driven architecture and streaming.

4. Skilled in microservices, RESTful APIs, and containerized app performance optimization.

5. Strong in problem-solving, team collaboration, clean code practices, and continuous learning.

6.  Proficient in Python (Flask) for building scalable applications and APIs.

7. Focus: Java, Python, Kubernetes, Cloud-native development

8. SQL databases

 

Description

Position Overview

We are seeking a skilled Developer to join our engineering team. The ideal candidate will have strong expertise in Java and Python ecosystems, with hands-on experience in modern web technologies, messaging systems, and cloud-native development using Kubernetes.


Key Responsibilities

  • Design, develop, and maintain scalable applications using Java and Spring Boot framework
  • Build robust web services and APIs using Python and Flask framework
  • Implement event-driven architectures using NATS messaging server
  • Deploy, manage, and optimize applications in Kubernetes environments
  • Develop microservices following best practices and design patterns
  • Collaborate with cross-functional teams to deliver high-quality software solutions
  • Write clean, maintainable code with comprehensive documentation
  • Participate in code reviews and contribute to technical architecture decisions
  • Troubleshoot and optimize application performance in containerized environments
  • Implement CI/CD pipelines and follow DevOps best practices

Required Qualifications

  • Bachelor's degree in Computer Science, Information Technology, or related field
  • 4+ years of experience in software development
  • Strong proficiency in Java with deep understanding of web technology stack
  • Hands-on experience developing applications with Spring Boot framework
  • Solid understanding of Python programming language with practical Flask framework experience
  • Working knowledge of NATS server for messaging and streaming data
  • Experience deploying and managing applications in Kubernetes
  • Understanding of microservices architecture and RESTful API design
  • Familiarity with containerization technologies (Docker)
  • Experience with version control systems (Git)


Skills & Competencies

  • Technical skills: Java (Spring Boot, Spring Cloud, Spring Security)
  • Python (Flask, SQLAlchemy, REST APIs)
  • NATS messaging patterns (pub/sub, request/reply, queue groups)
  • Kubernetes (deployments, services, ingress, ConfigMaps, Secrets)
  • Web technologies (HTTP, REST, WebSocket, gRPC)
  • Container orchestration and management
  • Soft skills: problem-solving and analytical thinking
  • Strong communication and collaboration
  • Self-motivated with ability to work independently
  • Attention to detail and code quality
  • Continuous learning mindset
  • Team player with mentoring capabilities
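
The NATS messaging patterns listed above (pub/sub, queue groups) can be illustrated with a minimal in-memory stand-in. This is not the NATS client API, just a sketch of subject-based publish/subscribe:

```python
from collections import defaultdict

class TinyBus:
    # In-memory subject-based pub/sub, loosely mimicking the messaging
    # pattern (not the NATS wire protocol or its client library).
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, subject, handler):
        self.subscribers[subject].append(handler)

    def publish(self, subject, message):
        # Deliver the message to every handler subscribed to the subject.
        for handler in self.subscribers[subject]:
            handler(message)

bus = TinyBus()
received = []
bus.subscribe("orders.created", received.append)
bus.subscribe("orders.created", lambda m: received.append(m.upper()))
bus.publish("orders.created", "order-42")
print(received)  # → ['order-42', 'ORDER-42']
```

Real NATS adds wildcard subjects, queue groups for load-balanced delivery, and request/reply on top of this basic fan-out.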


Digital Convergence Technologies
Pune, Bengaluru (Bangalore), Delhi, Gurugram, Noida, Ghaziabad, Faridabad, Mumbai, Hyderabad
5 - 8 yrs
₹40L - ₹45L / yr
Artificial Intelligence (AI)
Data-flow analysis
Microsoft SharePoint
API
skill iconPython

AI Agent Builder – Internal Functions and Data Platform Development Tools


About the Role:

We are seeking a forward-thinking AI Agent Builder to lead the design, development, deployment, and usage reporting of Microsoft Copilot and other AI-powered agents across our data platform development tools and internal business functions. This role will be instrumental in driving automation, improving onboarding, and enhancing operational efficiency through intelligent, context-aware assistants.

This role is central to our GenAI transformation strategy. You will help shape the future of how our teams interact with data, reduce administrative burden, and unlock new efficiencies across the organization. Your work will directly contribute to our “Art of the Possible” initiative—demonstrating tangible business value through AI.

You Will:

•                 Copilot Agent Development: Use Microsoft Copilot Studio and Agent Builder to create, test, and deploy AI agents that automate workflows, answer queries, and support internal teams.

•                 Data Engineering Enablement: Build agents that assist with data connector scaffolding, pipeline generation, and onboarding support for engineers.

•                 Knowledge Base Integration: Curate and integrate documentation (e.g., ERDs, connector specs) into Copilot-accessible repositories (SharePoint, Confluence) to support contextual AI responses.

•                 Prompt Engineering: Design reusable prompt templates and conversational flows to streamline repeated tasks and improve agent usability.

•                 Tool Evaluation & Integration: Assess and integrate complementary AI tools (e.g., GitLab Duo, Databricks AI, Notebook LM) to extend Copilot capabilities.

•                 Cross-Functional Collaboration: Partner with product, delivery, PMO, and security teams to identify high-value use cases and scale successful agent implementations.

•                 Governance & Monitoring: Ensure agents align with Responsible AI principles, monitor performance, and iterate based on feedback and evolving business needs.

•                 Adoption and Usage Reporting: Use Microsoft Viva Insights and other tools to report on user adoption, usage and business value delivered.
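
As a toy illustration of the reusable prompt templates mentioned above (the template text, team names, and fields are invented):

```python
from string import Template

# A reusable prompt template with named slots; a real agent would add
# guardrails, retrieved context, and output-format instructions.
ONBOARDING_PROMPT = Template(
    "You are an internal assistant for the $team team.\n"
    "Using only the provided documentation, answer: $question\n"
    "If the answer is not in the documentation, say so."
)

def build_prompt(team, question):
    # Fill the slots; Template.substitute raises if a slot is missing.
    return ONBOARDING_PROMPT.substitute(team=team, question=question)

prompt = build_prompt("Data Engineering", "How do I scaffold a new connector?")
print(prompt)
```

Keeping templates as data rather than inline strings makes them easy to version, review, and reuse across agents.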

What We're Looking For:

•                 Proven experience with Microsoft 365 Copilot, Copilot Studio, or similar AI platforms (ChatGPT, Claude, etc.).

•                 Strong understanding of data engineering workflows, tools (e.g., Git, Databricks, Unity Catalog), and documentation practices.

•                 Familiarity with SharePoint, Confluence, and Microsoft Graph connectors.

•                 Experience in prompt engineering and conversational UX design.

•                 Ability to translate business needs into scalable AI solutions.

•                 Excellent communication and collaboration skills across technical and non-technical audiences.

Bonus Points:

•                 Experience with GitLab Duo, Notebook LM, or other AI developer tools.

•                 Background in enterprise data platforms, ETL pipelines, or internal business systems.

•                 Exposure to AI governance, security, and compliance frameworks.

•                 Prior work in a regulated industry (e.g., healthcare, finance) is a plus.

IT Services Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Hyderabad
10 - 12 yrs
₹20L - ₹35L / yr
Signal integrity
Systems analysis and design
Schematic
Hardware
EMI

Required Skills: Advanced Hardware Board Design Expertise, Signal Integrity, EMI/EMC & Design Analysis, Board Bring-Up & Troubleshooting, EDA Tools & Technical Documentation, Cross-Functional & Supply Chain Coordination

 

Criteria:

  • Education: B.Tech / M.Tech in ECE / CSE / IT
  • Experience: 10–12 years in hardware board design, system hardware engineering, and full product deployment cycles
  • Proven expertise in digital, analog, and power electronic circuit analysis & design
  • Strong hands-on experience designing boards with SoCs, FPGAs, CPLDs, and MPSoC architectures
  • Deep understanding of signal integrity, EMI/EMC, and high-speed design considerations
  • Must have successfully completed at least two hardware product development cycles from high-level design to final deployment
  • Ability to independently handle schematic design, design analysis (DC drop, SI), and cross-team design reviews
  • Experience in sourcing & procurement of electronic components, PCBs, and mechanical parts for embedded/IoT/industrial hardware
  • Strong experience in board bring-up, debugging, issue investigation, and cross-functional triage with firmware/software teams
  • Expertise in hardware validation, test planning, test execution, equipment selection, debugging, and report preparation
  • Proficiency in Cadence Allegro or Altium EDA tools (mandatory)
  • Experience coordinating with layout, mechanical, SI, EMC, manufacturing, and supply chain teams
  • Strong understanding of manufacturing services, production pricing models, supply chain, and logistics for electronics/electromechanical components

 

Description

REQUIRED SKILLS:

• Extensive experience in hardware board designs and towards multiple product field deployment cycle.

• Strong foundation and expertise in analyzing digital, Analog and power electronic circuits.

• Proficient with SoC, FPGAs, CPLD and MPSOC architecture-based board designs.

• Knowledgeable in signal integrity, EMI/EMC concepts for digital and power electronics.

• Completed at least two project from high-level design to final product level deployment.

• Capable of independently managing product’s schematic, design analysis DC Drop, Singal Integrity, and coordinating reviews with peer of layout, mechanical, SI, and EMC teams.

• Sourcing and procurement of electronic components, PCBs, and mechanical parts for cutting-edge IoT, embedded, and industrial product development.

• Experienced in board bring-up, issue investigation, and triage in collaboration with firmware and software teams.

• Skilled in preparing hardware design documentation, validation test planning, identify necessary test equipment, test development, execution, debugging, and report preparation.

• Effective communication and interpersonal skills for collaborative work with cross-functional teams, including post-silicon bench validation, BIOS, and driver development/QA.

• Hands-on experience with Cadence Allegro/Altium EDA tools is essential.

• Familiarity with programming and scripting languages like Python and Perl, and experience in test automation is advantageous.

• Excellent exposure to coordinating manufacturing services, production pricing models, supply chain, and logistics in the electronics and electromechanical components domain.

 


Education Requirements: 

B. Tech / M. Tech (ECE/ CSE/ IT)

Experience - 10 to 12 Years


 

Read more
Deqode

at Deqode

1 recruiter
Samiksha Agrawal
Posted by Samiksha Agrawal
Delhi, Gurugram, Noida, Ghaziabad, Faridabad, Mumbai, Pune, Bengaluru (Bangalore), Hyderabad, Jaipur, Bhopal
5 - 8 yrs
₹5L - ₹13L / yr
skill iconPython
Azure
Artificial Intelligence (AI)
FastAPI
skill iconFlask
+3 more

Job Description: Python-Azure AI Developer

Experience: 5+ years

Locations: Bangalore | Pune | Chennai | Jaipur | Hyderabad | Gurgaon | Bhopal

Mandatory Skills:

  • Python: Expert-level proficiency with FastAPI/Flask
  • Azure Services: Hands-on experience integrating Azure cloud services
  • Databases: PostgreSQL, Redis
  • AI Expertise: Exposure to Agentic AI technologies, frameworks, or SDKs with strong conceptual understanding

Good to Have:

  • Workflow automation tools (n8n or similar)
  • Experience with LangChain, AutoGen, or other AI agent frameworks
  • Azure OpenAI Service knowledge

Key Responsibilities:

  • Develop AI-powered applications using Python and Azure
  • Build RESTful APIs with FastAPI/Flask
  • Integrate Azure services for AI/ML workloads
  • Implement agentic AI solutions
  • Database optimization and management
  • Workflow automation implementation


Read more
Global digital transformation solutions provider.

Global digital transformation solutions provider.

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore), Chennai, Kochi (Cochin), Trivandrum, Hyderabad, Thiruvananthapuram
8 - 10 yrs
₹10L - ₹25L / yr
Business Analysis
Data Visualization
PowerBI
SQL
Tableau
+18 more

Job Description – Senior Technical Business Analyst

Location: Trivandrum (Preferred) | Open to any location in India

Shift Timings: an 8-hour window between 7:30 PM IST and 4:30 AM IST

 

About the Role

We are seeking highly motivated and analytically strong Senior Technical Business Analysts who can work seamlessly with business and technology stakeholders to convert a one-line problem statement into a well-defined project or opportunity. This role is ideal for experienced professionals who have a strong foundation in data analytics, data engineering, data visualization, and data science, along with a strong drive to learn, collaborate, and grow in a dynamic, fast-paced environment.

As a Technical Business Analyst, you will be responsible for translating complex business challenges into actionable user stories, analytical models, and executable tasks in Jira. You will work across the entire data lifecycle—from understanding business context to delivering insights, solutions, and measurable outcomes.

 

Key Responsibilities

Business & Analytical Responsibilities

  • Partner with business teams to understand one-line problem statements and translate them into detailed business requirements, opportunities, and project scope.
  • Conduct exploratory data analysis (EDA) to uncover trends, patterns, and business insights.
  • Create documentation including Business Requirement Documents (BRDs), user stories, process flows, and analytical models.
  • Break down business needs into concise, actionable, and development-ready user stories in Jira.

Data & Technical Responsibilities

  • Collaborate with data engineering teams to design, review, and validate data pipelines, data models, and ETL/ELT workflows.
  • Build dashboards, reports, and data visualizations using leading BI tools to communicate insights effectively.
  • Apply foundational data science concepts such as statistical analysis, predictive modeling, and machine learning fundamentals.
  • Validate and ensure data quality, consistency, and accuracy across datasets and systems.

Collaboration & Execution

  • Work closely with product, engineering, BI, and operations teams to support the end-to-end delivery of analytical solutions.
  • Assist in development, testing, and rollout of data-driven solutions.
  • Present findings, insights, and recommendations clearly and confidently to both technical and non-technical stakeholders.

 

Required Skillsets

Core Technical Skills

  • 6+ years of Technical Business Analyst experience within an overall professional experience of 8+ years
  • Data Analytics: SQL, descriptive analytics, business problem framing.
  • Data Engineering (Foundational): Understanding of data warehousing, ETL/ELT processes, cloud data platforms (AWS/GCP/Azure preferred).
  • Data Visualization: Experience with Power BI, Tableau, or equivalent tools.
  • Data Science (Basic/Intermediate): Python/R, statistical methods, fundamentals of ML algorithms.

 

Soft Skills

  • Strong analytical thinking and structured problem-solving capability.
  • Ability to convert business problems into clear technical requirements.
  • Excellent communication, documentation, and presentation skills.
  • High curiosity, adaptability, and eagerness to learn new tools and techniques.

 

Educational Qualifications

  • BE/B.Tech or equivalent in:
  • Computer Science / IT
  • Data Science

 

What We Look For

  • Demonstrated passion for data and analytics through projects and certifications.
  • Strong commitment to continuous learning and innovation.
  • Ability to work both independently and in collaborative team environments.
  • Passion for solving business problems using data-driven approaches.
  • Proven ability (or aptitude) to convert a one-line business problem into a structured project or opportunity.

 

Why Join Us?

  • Exposure to modern data platforms, analytics tools, and AI technologies.
  • A culture that promotes innovation, ownership, and continuous learning.
  • Supportive environment to build a strong career in data and analytics.

 

Skills: Data Analytics, Business Analysis, SQL


Must-Haves

Technical Business Analyst (6+ years), SQL, Data Visualization (Power BI, Tableau), Data Engineering (ETL/ELT, cloud platforms), Python/R

Read more
Quanteon Solutions
DurgaPrasad Sannamuri
Posted by DurgaPrasad Sannamuri
Hyderabad
0 - 2 yrs
₹3L - ₹5L / yr
skill iconJava
skill iconPython
skill iconJavascript
Selenium
Playwright
+13 more

About the Role

We are looking for a strong, self-driven QA Engineer who can perform a hybrid role in the new testing paradigm — acting as both a Business Analyst (BA) and a Quality Assurance (QA) professional. The ideal candidate should be capable of understanding business needs under direction, translating them into clear requirements, and then validating them through effective QA practices.

This role requires someone who can leverage AI tools extensively to automate and optimize both requirements documentation and QA activities, reducing manual effort while improving speed and accuracy.

 

Key Responsibilities

Business Analysis Responsibilities

  • Work under direction to understand business problems, workflows, and client expectations
  • Elicit, analyze, and document business and functional requirements
  • Create and maintain BRDs, FRDs, user stories, acceptance criteria, and process flows
  • Collaborate with stakeholders, developers, and product teams to clarify requirements
  • Use AI tools to assist with requirement generation, refinement, documentation, and validation

Quality Assurance Responsibilities

  • Design, develop, and execute manual and automated test cases based on documented requirements
  • Perform functional, regression, smoke, sanity, and UAT testing
  • Ensure traceability between requirements and test cases
  • Identify, log, track, and retest defects using defect tracking tools
  • Collaborate closely with development teams to ensure quality delivery
  • Use AI-powered QA tools to automate test case creation, execution, and maintenance

AI & Automation Focus

  • Use AI tools to:
  • Generate and refine requirements and user stories
  • Auto-create test cases from requirements
  • Optimize regression test suites
  • Perform test data generation and defect analysis
  • Continuously identify areas where AI can reduce manual effort and improve efficiency
  • Ensure quality, accuracy, and business alignment of AI-generated outputs

 

Required Skills & Qualifications

  • 1–3 years of experience in QA / Software Testing, with exposure to Business Analysis activities
  • Strong understanding of SDLC, STLC, and Agile methodologies
  • Proven ability to understand requirements and translate them into effective test scenarios
  • Experience with QA Automation tools (Selenium, Cypress, Playwright, or similar)
  • Hands-on experience using AI tools for QA and documentation (AI test generators, AI copilots, testRigor, Gen AI tools, etc.)
  • Good knowledge of test case design techniques and requirement traceability
  • Basic to intermediate knowledge of programming/scripting languages (Java, JavaScript, or Python)
  • Experience with API testing (Postman or similar tools)
  • Familiarity with JIRA, Confluence, or similar tools
  • Strong analytical, problem-solving, and documentation skills
  • Ability to take direction, work independently, and deliver with minimal supervision

Educational Qualifications

  • B.Tech / B.E in IT, CSE, AI/ML, ECE
  • M.Tech / M.E in IT, CSE, AI/ML, ECE
  • Strong academic foundation in programming, software engineering, or testing concepts is preferred
  • Certifications in Software Testing, Automation, or AI tools (optional but an added advantage)
Read more
Hashone Careers
Bengaluru (Bangalore), Pune, Hyderabad
5 - 10 yrs
₹12L - ₹25L / yr
DevOps
skill iconPython
cicd
skill iconKubernetes
skill iconDocker
+1 more

Job Description

Experience: 5 - 9 years

Location: Bangalore/Pune/Hyderabad

Work Mode: Hybrid(3 Days WFO)


Senior Cloud Infrastructure Engineer for Data Platform 


The ideal candidate will play a critical role in designing, implementing, and maintaining cloud infrastructure and CI/CD pipelines to support scalable, secure, and efficient data and analytics solutions. This role requires a strong understanding of cloud-native technologies, DevOps best practices, and hands-on experience with Azure and Databricks.


Key Responsibilities:


Cloud Infrastructure Design & Management

Architect, deploy, and manage scalable and secure cloud infrastructure on Microsoft Azure.

Implement best practices for Azure Resource Management, including resource groups, virtual networks, and storage accounts.

Optimize cloud costs and ensure high availability and disaster recovery for critical systems


Databricks Platform Management

Set up, configure, and maintain Databricks workspaces for data engineering, machine learning, and analytics workloads.

Automate cluster management, job scheduling, and monitoring within Databricks.

Collaborate with data teams to optimize Databricks performance and ensure seamless integration with Azure services.


CI/CD Pipeline Development

Design and implement CI/CD pipelines for deploying infrastructure, applications, and data workflows using tools like Azure DevOps, GitHub Actions, or similar.

Automate testing, deployment, and monitoring processes to ensure rapid and reliable delivery of updates.


Monitoring & Incident Management

Implement monitoring and alerting solutions using tools like Dynatrace, Azure Monitor, Log Analytics, and Databricks metrics.

Troubleshoot and resolve infrastructure and application issues, ensuring minimal downtime.


Security & Compliance

Enforce security best practices, including identity and access management (IAM), encryption, and network security.

Ensure compliance with organizational and regulatory standards for data protection and cloud operations.


Collaboration & Documentation

Work closely with cross-functional teams, including data engineers, software developers, and business stakeholders, to align infrastructure with business needs.

Maintain comprehensive documentation for infrastructure, processes, and configurations.


Required Qualifications

Education: Bachelor’s degree in Computer Science, Engineering, or a related field.


Must Have Experience:

6+ years of experience in DevOps or Cloud Engineering roles.

Proven expertise in Microsoft Azure services, including Azure Data Lake, Azure Databricks, Azure Data Factory (ADF), Azure Functions, Azure Kubernetes Service (AKS), and Azure Active Directory.

Hands-on experience with Databricks for data engineering and analytics.


Technical Skills:

Proficiency in Infrastructure as Code (IaC) tools like Terraform, ARM templates, or Bicep.

Strong scripting skills in Python, or Bash.

Experience with containerization and orchestration tools like Docker and Kubernetes.

Familiarity with version control systems (e.g., Git) and CI/CD tools (e.g., Azure DevOps, GitHub Actions).


Soft Skills:

Strong problem-solving and analytical skills.

Excellent communication and collaboration abilities.

Read more
AI Industry

AI Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Mumbai, Bengaluru (Bangalore), Hyderabad, Gurugram
5 - 12 yrs
₹20L - ₹46L / yr
skill iconData Science
Artificial Intelligence (AI)
skill iconMachine Learning (ML)
Generative AI
skill iconDeep Learning
+14 more

Review Criteria

  • Strong Senior Data Scientist (AI/ML/GenAI) Profile
  • 5+ years of experience in designing, developing, and deploying Machine Learning / Deep Learning (ML/DL) systems in production
  • Must have strong hands-on experience in Python and deep learning frameworks such as PyTorch, TensorFlow, or JAX.
  • 1+ years of experience in fine-tuning Large Language Models (LLMs) using techniques like LoRA/QLoRA, and building RAG (Retrieval-Augmented Generation) pipelines.
  • Must have experience with MLOps and production-grade systems including Docker, Kubernetes, Spark, model registries, and CI/CD workflows

 

Preferred

  • Prior experience in open-source GenAI contributions, applied LLM/GenAI research, or large-scale production AI systems
  • Preferred (Education) – B.S./M.S./Ph.D. in Computer Science, Data Science, Machine Learning, or a related field.

 

Job Specific Criteria

  • CV Attachment is mandatory
  • Which is your preferred job location (Mumbai / Bengaluru / Hyderabad / Gurgaon)?
  • Are you okay with 3 Days WFO?
  • Virtual Interview requires video to be on, are you okay with it?


Role & Responsibilities

Company is hiring a Senior Data Scientist with strong expertise in AI, machine learning engineering (MLE), and generative AI. You will play a leading role in designing, deploying, and scaling production-grade ML systems — including large language model (LLM)-based pipelines, AI copilots, and agentic workflows. This role is ideal for someone who thrives on balancing cutting-edge research with production rigor and loves mentoring while building impact-first AI applications.

 

Responsibilities:

  • Own the full ML lifecycle: model design, training, evaluation, deployment
  • Design production-ready ML pipelines with CI/CD, testing, monitoring, and drift detection
  • Fine-tune LLMs and implement retrieval-augmented generation (RAG) pipelines
  • Build agentic workflows for reasoning, planning, and decision-making
  • Develop both real-time and batch inference systems using Docker, Kubernetes, and Spark
  • Leverage state-of-the-art architectures: transformers, diffusion models, RLHF, and multimodal pipelines
  • Collaborate with product and engineering teams to integrate AI models into business applications
  • Mentor junior team members and promote MLOps, scalable architecture, and responsible AI best practices


Ideal Candidate

  • 5+ years of experience in designing, deploying, and scaling ML/DL systems in production
  • Proficient in Python and deep learning frameworks such as PyTorch, TensorFlow, or JAX
  • Experience with LLM fine-tuning, LoRA/QLoRA, vector search (Weaviate/PGVector), and RAG pipelines
  • Familiarity with agent-based development (e.g., ReAct agents, function-calling, orchestration)
  • Solid understanding of MLOps: Docker, Kubernetes, Spark, model registries, and deployment workflows
  • Strong software engineering background with experience in testing, version control, and APIs
  • Proven ability to balance innovation with scalable deployment
  • B.S./M.S./Ph.D. in Computer Science, Data Science, or a related field
  • Bonus: Open-source contributions, GenAI research, or applied systems at scale


Read more
Auxo AI
kusuma Gullamajji
Posted by kusuma Gullamajji
Bengaluru (Bangalore), Hyderabad, Mumbai, Gurugram
5 - 10 yrs
₹10L - ₹40L / yr
skill iconPython
SQL
Google Cloud Platform (GCP)
Dataform

Responsibilities:

  • Build and optimize batch and streaming data pipelines using Apache Beam (Dataflow)
  • Design and maintain BigQuery datasets using best practices in partitioning, clustering, and materialized views
  • Develop and manage Airflow DAGs in Cloud Composer for workflow orchestration
  • Implement SQL-based transformations using Dataform (or dbt)
  • Leverage Pub/Sub for event-driven ingestion and Cloud Storage for raw/lake layer data architecture
  • Drive engineering best practices across CI/CD, testing, monitoring, and pipeline observability
  • Partner with solution architects and product teams to translate data requirements into technical designs
  • Mentor junior data engineers and support knowledge-sharing across the team
  • Contribute to documentation, code reviews, sprint planning, and agile ceremonies
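The batch-pipeline shape described above (read, transform, aggregate) can be sketched in pure Python; Apache Beam itself is not shown, and these generator stages are only stand-ins for Beam's source, ParDo, and GroupByKey/Combine transforms:

```python
def read(records):
    # Source stage: yield raw events (a real pipeline reads Pub/Sub or GCS).
    yield from records

def parse(events):
    # Transform stage: split "user,amount" lines into typed records.
    for line in events:
        try:
            user, amount = line.split(",")
            yield {"user": user, "amount": float(amount)}
        except ValueError:
            continue  # in Beam, bad rows would go to a dead-letter output

def total_per_user(rows):
    # Aggregate stage: the pure-Python stand-in for GroupByKey + Combine.
    totals = {}
    for row in rows:
        totals[row["user"]] = totals.get(row["user"], 0.0) + row["amount"]
    return totals

raw = ["alice,10.0", "bob,5.5", "alice,2.5", "corrupt-row"]
result = total_per_user(parse(read(raw)))
```

In a real Dataflow job each stage runs distributed and the output would land in a partitioned, clustered BigQuery table rather than a dict.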



Requirements


  • 5+ years of hands-on experience in data engineering, with at least 2 years on GCP
  • Proven expertise in BigQuery, Dataflow (Apache Beam), and Cloud Composer (Airflow)
  • Strong programming skills in Python and/or Java
  • Experience with SQL optimization, data modeling, and pipeline orchestration
  • Familiarity with Git, CI/CD pipelines, and data quality monitoring frameworks
  • Exposure to Dataform, dbt, or similar tools for ELT workflows
  • Solid understanding of data architecture, schema design, and performance tuning
  • Excellent problem-solving and collaboration skills

Bonus Skills:

  • GCP Professional Data Engineer certification
  • Experience with Vertex AI, Cloud Functions, Dataproc, or real-time streaming architectures
  • Familiarity with data governance tools (e.g., Atlan, Collibra, Dataplex)
  • Exposure to Docker/Kubernetes, API integration, and infrastructure-as-code (Terraform)


Read more
Versatile Commerce LLP

at Versatile Commerce LLP

2 candid answers
Burugupally Shailaja
Posted by Burugupally Shailaja
Hyderabad
3 - 6 yrs
₹4L - ₹6L / yr
Selenium
skill iconJava
skill iconPython
skill iconJenkins
TestNG
+6 more

We’re Hiring – Automation Test Engineer!

We at Versatile Commerce are looking for passionate Automation Testing Professionals to join our growing team!

📍 Location: Gachibowli, Hyderabad (Work from Office)

💼 Experience: 3 – 5 Years

Notice Period: Immediate Joiners Preferred

What we’re looking for:

✅ Strong experience in Selenium / Cypress / Playwright

✅ Proficient in Java / Python / JavaScript

✅ Hands-on with TestNG / JUnit / Maven / Jenkins

✅ Experience in API Automation (Postman / REST Assured)

✅ Good understanding of Agile Testing & Defect Management Tools (JIRA, Zephyr)

Read more
CADFEM India
Agency job
via hirezyai by Aardra Suresh
Hyderabad
4 - 8 yrs
₹12L - ₹15L / yr
skill iconPython
skill iconReact.js
TypeScript
skill iconPostgreSQL
skill iconAngular (2+)
+2 more

Role Summary

We are seeking a Full-Stack Developer to build and secure features for our Therapy Planning Software (TPS), which integrates with RMS/RIS, EMR systems, devices (DICOM, Bluetooth, VR, robotics, FES), and supports ICD–ICF–ICHI coding. The role involves ~40% frontend and 60% backend development, with end-to-end responsibility for security across application layers.

Responsibilities

Frontend (40%)

  1. Build responsive, accessible UI in React + TypeScript (or Angular/Vue).
  2. Implement multilingual (i18n/l10n) and WCAG 2.1 accessibility standards.
  3. Develop offline-capable PWAs for home programs.
  4. Integrate REST/FHIR APIs for patient workflows, scheduling, and reporting.
  5. Support features like voice-to-text, video capture, and compression.

Backend (60%)

  1. Design and scale REST APIs using Python (FastAPI/Django).
  2. Build modules for EMR storage, assessments, therapy plans, and data logging.
  3. Implement HL7/FHIR endpoints and secure integrations with external EMRs.
  4. Handle file uploads (virus scanning, HD video compression, secure storage).
  5. Optimize PostgreSQL schemas and queries for performance.
  6. Implement RBAC, MFA, PDPA compliance, edit locks, and audit trails.

Security Layer (Ownership)

  1. Identity & Access: OAuth2/OIDC, JWT, MFA, SSO.
  2. Data Protection: TLS, AES-256 at rest, field-level encryption, immutable audit logs.
  3. Compliance: PDPA, HIPAA principles, MDA requirements.
  4. DevSecOps: Secure coding (OWASP ASVS), dependency scanning, secrets management.
  5. Monitoring: Logging/metrics (ELK/Prometheus), anomaly detection, DR/BCP preparedness.
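The token-based access control listed above can be illustrated with a minimal HMAC-signed token in the JWT style. This is a sketch only: production code should use a vetted library such as PyJWT, and real keys must come from a secrets manager, never a hardcoded constant:

```python
import base64, hashlib, hmac, json

SECRET = b"demo-secret"  # illustrative only; load real keys from a secrets manager

def b64url(data: bytes) -> str:
    # JWT uses unpadded URL-safe base64.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign(payload: dict) -> str:
    # JWT-style token: header.payload.signature, HMAC-SHA256 signed.
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    sig = b64url(hmac.new(SECRET, f"{header}.{body}".encode(), hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"

def verify(token: str) -> bool:
    # Recompute the signature and compare in constant time.
    header, body, sig = token.rsplit(".", 2)
    expected = b64url(hmac.new(SECRET, f"{header}.{body}".encode(), hashlib.sha256).digest())
    return hmac.compare_digest(sig, expected)

token = sign({"sub": "therapist-42", "role": "clinician"})
```

A tampered token fails verification, which is the property RBAC checks and audit trails build on; claims like `role` are trusted only after the signature check passes.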

Requirements

  • Strong skills in Python (FastAPI/Django) and React + TypeScript.
  • Experience with HL7/FHIR, EMR data, and REST APIs.
  • Knowledge of OAuth2/JWT authentication, RBAC, audit logging.
  • Proficiency with PostgreSQL and database optimization.
  • Cloud deployment (AWS/Azure) and containerization (Docker/K8s) a plus.

Added Advantage

  • Familiarity with ICD, ICF, ICHI coding systems or medical diagnosis workflows.

Success Metrics

  • Deliver secure end-to-end features with clinical workflow integration.
  • Pass OWASP/ASVS L2 security baseline.
  • Establish full audit trail and role-based access across at least one clinical workflow.


Read more
Loyalty Juggernaut Inc

at Loyalty Juggernaut Inc

2 recruiters
Shraddha Dhavle
Posted by Shraddha Dhavle
Hyderabad
3 - 5 yrs
₹5L - ₹15L / yr
ETL
ETL architecture
skill iconPython
Data engineering

At Loyalty Juggernaut, we’re on a mission to revolutionize customer loyalty through AI-driven SaaS solutions. We are THE JUGGERNAUTS, driving innovation and impact in the loyalty ecosystem with GRAVTY®, our SaaS Product that empowers multinational enterprises to build deeper customer connections. Designed for scalability and personalization, GRAVTY® delivers cutting-edge loyalty solutions that transform customer engagement across diverse industries including Airlines, Airport, Retail, Hospitality, Banking, F&B, Telecom, Insurance and Ecosystem.


Our Impact:

  • 400+ million members connected through our platform.
  • Trusted by 100+ global brands/partners, driving loyalty and brand devotion worldwide.


Proud to be a Three-Time Champion for Best Technology Innovation in Loyalty!!


Explore more about us at www.lji.io.


What you will OWN:

  • Build the infrastructure required for optimal extraction, transformation, and loading of data from various sources using SQL and AWS ‘big data’ technologies.
  • Create and maintain optimal data pipeline architecture.
  • Identify, design, and implement internal process improvements, automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Work with stakeholders, including the Technical Architects, Developers, Product Owners, and Executives, to assist with data-related technical issues and support their data infrastructure needs.
  • Create tools for data management and data analytics that can assist them in building and optimizing our product to become an innovative industry leader.


You would make a GREAT FIT if you have:

  • Have 2 to 5 years of relevant backend development experience, with solid expertise in Python.
  • Possess strong skills in Data Structures and Algorithms, and can write optimized, maintainable code.
  • Are familiar with database systems, and can comfortably work with PostgreSQL, as well as NoSQL solutions like MongoDB or DynamoDB.
  • Have hands-on experience using cloud data warehouses like AWS Redshift, GBQ, etc.
  • Experience with AWS cloud services: EC2, EMR, RDS, Redshift, and AWS Batch would be an added advantage.
  • Have a solid understanding of ETL processes and tools and can build or modify ETL pipelines effectively.
  • Have experience managing or building data pipelines and architectures at scale.
  • Understand the nuances of data ingestion, transformation, storage, and analytics workflows.
  • Communicate clearly and work collaboratively across engineering and product teams.
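A minimal sketch of the extract-transform-load flow described above, using an in-memory SQLite database as a stand-in for a cloud warehouse such as Redshift (all table names, fields, and tier rules here are illustrative):

```python
import sqlite3

def extract():
    # Extract: in production this would pull from S3 or source systems.
    return [("m1", "US", 120), ("m2", "IN", 250), ("m3", "US", 80)]

def transform(rows):
    # Transform: normalize country codes and bucket members by points.
    for member_id, country, points in rows:
        tier = "gold" if points >= 100 else "silver"
        yield (member_id, country.lower(), points, tier)

def load(rows, conn):
    # Load: idempotent table creation plus bulk insert.
    conn.execute("CREATE TABLE IF NOT EXISTS members (id TEXT, country TEXT, points INT, tier TEXT)")
    conn.executemany("INSERT INTO members VALUES (?, ?, ?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract()), conn)
gold = conn.execute("SELECT COUNT(*) FROM members WHERE tier = 'gold'").fetchone()[0]
```

At scale the same three stages would be scheduled as pipeline steps (e.g., on AWS Batch or EMR) with data-quality checks between them.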


Why Choose US?

  • This opportunity offers a dynamic and supportive work environment where you'll have the chance to not just collaborate with talented technocrats but also work with globally recognized brands, gain exposure, and carve your own career path.
  • You will get to innovate and dabble in the future of technology -Enterprise Cloud Computing, Blockchain, Machine Learning, AI, Mobile, Digital Wallets, and much more.


Read more
Versatile Commerce LLP

at Versatile Commerce LLP

2 candid answers
Burugupally Shailaja
Posted by Burugupally Shailaja
Hyderabad
3 - 9 yrs
₹3L - ₹8L / yr
Retrieval Augmented Generation (RAG)
skill iconMachine Learning (ML)
Generative AI
Open-source LLMs
skill iconPython
+2 more

📍Company: Versatile Commerce

 📍 Position: Data Scientists

 📍 Experience: 3-9 yrs

 📍 Location: Hyderabad (WFO)

 📅 Notice Period: 0- 15 Days

Read more
Pipaltree AI

at Pipaltree AI

2 candid answers
Mudit Tanwani
Posted by Mudit Tanwani
Remote, Hyderabad
3 - 7 yrs
₹24L - ₹60L / yr
Artificial Intelligence (AI)
skill iconPython
LLMs

At Pipaltree, we’re building an AI-enabled platform that helps brands understand how they’re truly perceived — not through surveys or static dashboards, but through real conversations happening across the world.

We’re a small team solving deep technical and product challenges: orchestrating large-scale conversation data, applying reasoning and summarization models, and turning this into insights that businesses can trust.


Requirements:

  • Deep understanding of distributed systems and asynchronous programming in Python
  • Experience with building scalable applications using LLMs or traditional ML techniques
  • Experience with Databases, Cache, and Micro services
  • Experience with DevOps is a huge plus
Read more
Talent Pro
Mumbai, Bengaluru (Bangalore), Hyderabad, Gurugram
5 - 8 yrs
₹25L - ₹35L / yr
skill iconPython
skill iconReact.js

Strong Full stack developer Profile

Mandatory (Experience 1) - Must Have Minimum 5+ YOE in Software Development,

Mandatory (Experience 2) - Must have 4+ YOE in backend using Python.

Mandatory (Experience 3) - Must have good experience in frontend using React JS with knowledge of HTML, CSS, and JavaScript.

Mandatory (Experience 4) - Must have experience in any database - MySQL / PostgreSQL / Oracle / SQL Server

Read more
One of the reputed Client in India

One of the reputed Client in India

Bengaluru (Bangalore), Mumbai, Delhi, Gurugram, Noida, Hyderabad, Pune
6 - 8 yrs
₹12L - ₹13L / yr
skill iconAmazon Web Services (AWS)
skill iconPython
PySpark

Our Client is looking to hire a Databricks Admin immediately.


This is PAN-INDIA Bulk hiring


Minimum of 6-8+ years with Databricks, PySpark/Python, and AWS.

Must have AWS experience.


Notice 15-30 days is preferred.


Share profiles at hr at etpspl dot com

Please refer/share our email to your friends/colleagues who are looking for job.

Read more
Bengaluru (Bangalore), Mumbai, Delhi, Gurugram, Noida, Ghaziabad, Faridabad, Pune, Hyderabad, Chennai
7 - 10 yrs
₹10L - ₹18L / yr
full stack
skill iconReact.js
skill iconPython
skill iconGo Programming (Golang)
CI/CD
+9 more

Full-Stack Developer

Exp: 5+ years required

Night shift: 8 PM - 5 AM / 9 PM - 6 AM

Only Immediate Joinee Can Apply


We are seeking a mid-to-senior level Full-Stack Developer with a foundational understanding of software development, cloud services, and database management. In this role, you will contribute to both the front-end and back-end of our application, focusing on creating a seamless user experience, supported by robust and scalable cloud infrastructure.

Key Responsibilities

● Develop and maintain user-facing features using React.js and TypeScript.

● Write clean, efficient, and well-documented JavaScript/TypeScript code.

● Assist in managing and provisioning cloud infrastructure on AWS using Infrastructure as Code (IaC) principles.

● Contribute to the design, implementation, and maintenance of our databases.

● Collaborate with senior developers and product managers to deliver high-quality software.

● Troubleshoot and debug issues across the full stack.

● Participate in code reviews to maintain code quality and share knowledge.

Qualifications

● Bachelor's degree in Computer Science, a related technical field, or equivalent practical experience.

● 5+ years of professional experience in web development.

● Proficiency in JavaScript and/or TypeScript.

● Proficiency in Golang and Python.

● Hands-on experience with the React.js library for building user interfaces.

● Familiarity with Infrastructure as Code (IaC) tools and concepts (e.g., AWS CDK, Terraform, or CloudFormation).

● Basic understanding of AWS and its core services (e.g., S3, EC2, Lambda, DynamoDB).

● Experience with database management, including relational (e.g., PostgreSQL) or NoSQL (e.g., DynamoDB, MongoDB) databases.

● Strong problem-solving skills and a willingness to learn.

● Familiarity with modern front-end build pipelines and tools like Vite and Tailwind CSS.

● Knowledge of CI/CD pipelines and automated testing.


Read more
Hyderabad
6 - 10 yrs
₹20L - ₹30L / yr
skill iconJava
skill iconPython
skill iconHTML/CSS
skill iconJavascript
skill iconSpring Boot
+4 more

Senior Software Developer – Java Full Stack | AI-Powered Innovation

Experience: 6–10 years

Department: Engineering & Innovation


🌟 About the Role

We’re searching for a Senior Software Developer who thrives on solving complex challenges and building world-class products that redefine technology boundaries. You’ll be part of a dynamic team that brings Java full-stack excellence together with Python and AI-driven innovations, crafting scalable, intelligent, and high-performance solutions.

If you love clean code, intelligent systems, and pushing the limits of what’s possible, this is your playground.


💡 What You’ll Do

  • Design, develop, and deploy robust Java-based full-stack applications with a focus on performance, scalability, and reliability.
  • Collaborate with cross-functional teams to integrate AI and Python-driven components into enterprise-grade systems.
  • Architect and maintain microservices, RESTful APIs, and modular components for high-availability platforms.
  • Engage in end-to-end product development — from ideation to deployment — using modern frameworks and tools.
  • Champion best coding practices, conduct code reviews, and mentor junior engineers.
  • Explore, experiment, and implement new technologies in AI, automation, and intelligent analytics.
  • Troubleshoot complex issues, debug performance bottlenecks, and deliver elegant solutions.


🧠 What Makes You Stand Out

  • Strong expertise in Java, Spring Boot, Hibernate, and modern JavaScript frameworks (React, Angular, or Vue).
  • Hands-on exposure to Python programming — especially for automation or AI/ML integration.
  • Solid understanding of AI/ML frameworks (TensorFlow, PyTorch, or OpenAI APIs) is a big plus.
  • Experience with cloud technologies (AWS, Azure, or GCP) and containerization tools (Docker, Kubernetes).
  • Proven record of building scalable microservices and RESTful APIs.
  • Passion for problem-solving, algorithmic thinking, and clean architecture.
  • Excellent communication and collaboration skills — you turn complex problems into creative solutions.


⚙️ Tech Stack Snapshot

Languages: Java, Python, JavaScript

Frameworks: Spring Boot, React/Angular/Vue, Flask (optional)

Tools: Git, Jenkins, Docker, Kubernetes

Databases: MongoDB, MySQL, PostgreSQL

Bonus: AI/ML frameworks, Generative AI, or NLP experience


🌈 Why You’ll Love Working Here

  • Work on cutting-edge AI-integrated applications that make a real-world impact.
  • Join a culture that values innovation, autonomy, and technical excellence.
  • Collaborate with brilliant minds who inspire and challenge you daily.
  • Enjoy flexibility, learning opportunities, and a growth-oriented environment.


💬 Ready to code the future?

Apply now and let’s build something extraordinary together!

Read more
Hunarstreet technologies pvt ltd

Hunarstreet technologies pvt ltd

Agency job
Chennai, Hyderabad, Bengaluru (Bangalore), Mumbai, Pune, Gurugram, Mohali, Panchkula
5 - 15 yrs
₹10L - ₹15L / yr
Fullstack Developer
Web Development
skill iconJavascript
TypeScript
skill iconGo Programming (Golang)
+5 more

We are seeking a mid-to-senior level Full-Stack Developer with a foundational understanding of software development, cloud services, and database management. In this role, you will contribute to both the front-end and back-end of our application, focusing on creating a seamless user experience, supported by robust and scalable cloud infrastructure.


Key Responsibilities

● Develop and maintain user-facing features using React.js and TypeScript.

● Write clean, efficient, and well-documented JavaScript/TypeScript code.

● Assist in managing and provisioning cloud infrastructure on AWS using Infrastructure as Code (IaC) principles.

● Contribute to the design, implementation, and maintenance of our databases.

● Collaborate with senior developers and product managers to deliver high-quality software.

● Troubleshoot and debug issues across the full stack.

● Participate in code reviews to maintain code quality and share knowledge.


Qualifications

● Bachelor's degree in Computer Science, a related technical field, or equivalent practical experience.

● 5+ years of professional experience in web development.

● Proficiency in JavaScript and/or TypeScript.

● Proficiency in Golang and Python.

● Hands-on experience with the React.js library for building user interfaces.

● Familiarity with Infrastructure as Code (IaC) tools and concepts (e.g., AWS CDK, Terraform, or CloudFormation).

● Basic understanding of AWS and its core services (e.g., S3, EC2, Lambda, DynamoDB).

● Experience with database management, including relational (e.g., PostgreSQL) or NoSQL (e.g., DynamoDB, MongoDB) databases.

● Strong problem-solving skills and a willingness to learn.

● Familiarity with modern front-end build pipelines and tools like Vite and Tailwind CSS.

● Knowledge of CI/CD pipelines and automated testing.

Read more
Estuate Software

at Estuate Software

1 candid answer
Deekshith K Naidu
Posted by Deekshith K Naidu
Hyderabad
5 - 12 yrs
₹5L - ₹35L / yr
Google Cloud Platform (GCP)
Apache Airflow
ETL
skill iconPython
Big query
+1 more

Job Title: Data Engineer / Integration Engineer

 

Job Summary:

We are seeking a highly skilled Data Engineer / Integration Engineer to join our team. The ideal candidate will have expertise in Python, workflow orchestration, cloud platforms (GCP/Google BigQuery), big data frameworks (Apache Spark or similar), API integration, and Oracle EBS. The role involves designing, developing, and maintaining scalable data pipelines, integrating various systems, and ensuring data quality and consistency across platforms. Knowledge of Ascend.io is a plus.

Key Responsibilities:

  • Design, build, and maintain scalable data pipelines and workflows.
  • Develop and optimize ETL/ELT processes using Python and workflow automation tools.
  • Implement and manage data integration between various systems, including APIs and Oracle EBS.
  • Work with Google Cloud Platform (GCP) or Google BigQuery (GBQ) for data storage, processing, and analytics.
  • Utilize Apache Spark or similar big data frameworks for efficient data processing.
  • Develop robust API integrations for seamless data exchange between applications.
  • Ensure data accuracy, consistency, and security across all systems.
  • Monitor and troubleshoot data pipelines, identifying and resolving performance issues.
  • Collaborate with data analysts, engineers, and business teams to align data solutions with business goals.
  • Document data workflows, processes, and best practices for future reference.
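To make the workflow-orchestration idea above concrete, here is a minimal sketch of a dependency-ordered task runner using only the standard library's graphlib. It is a toy stand-in for what Airflow or Prefect provide; the task names and data are hypothetical:

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

def run_pipeline(tasks, dependencies):
    """Run tasks in dependency order, mimicking a minimal Airflow-style DAG.

    tasks: dict of name -> zero-arg callable
    dependencies: dict of name -> set of upstream task names
    Returns the task names in the order they were executed.
    """
    order = list(TopologicalSorter(dependencies).static_order())
    executed = []
    for name in order:
        tasks[name]()  # a real orchestrator would add retries and monitoring here
        executed.append(name)
    return executed

# Toy ETL DAG: extract -> transform -> load
results = {}
tasks = {
    "extract":   lambda: results.setdefault("raw", [3, 1, 2]),
    "transform": lambda: results.setdefault("clean", sorted(results["raw"])),
    "load":      lambda: results.setdefault("loaded", len(results["clean"])),
}
deps = {"extract": set(), "transform": {"extract"}, "load": {"transform"}}
print(run_pipeline(tasks, deps))  # → ['extract', 'transform', 'load']
```

Production tools add scheduling, retries, and backfills on top of exactly this dependency-ordering core.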

Required Skills & Qualifications:

  • Strong proficiency in Python for data engineering and workflow automation.
  • Experience with workflow orchestration tools (e.g., Apache Airflow, Prefect, or similar).
  • Hands-on experience with Google Cloud Platform (GCP) or Google BigQuery (GBQ).
  • Expertise in big data processing frameworks, such as Apache Spark.
  • Experience with API integrations (REST, SOAP, GraphQL) and handling structured/unstructured data.
  • Strong problem-solving skills and ability to optimize data pipelines for performance.
  • Experience working in an agile environment with CI/CD processes.
  • Strong communication and collaboration skills.

Preferred Skills & Nice-to-Have:

  • Experience with Ascend.io platform for data pipeline automation.
  • Knowledge of SQL and NoSQL databases.
  • Familiarity with Docker and Kubernetes for containerized workloads.
  • Exposure to machine learning workflows is a plus.

Why Join Us?

  • Opportunity to work on cutting-edge data engineering projects.
  • Collaborative and dynamic work environment.
  • Competitive compensation and benefits.
  • Professional growth opportunities with exposure to the latest technologies.

How to Apply:

Interested candidates can apply by sending their resume to [your email/contact].

 

Read more
VyTCDC
Gobinath Sundaram
Posted by Gobinath Sundaram
Bengaluru (Bangalore), Pune, Hyderabad
6 - 12 yrs
₹5L - ₹28L / yr
skill iconData Science
skill iconPython
Large Language Models (LLM)

Job Description:

 

Role: Data Scientist

 

Responsibilities:

  • Lead data science and machine learning projects, contributing to model development, optimization, and evaluation.
  • Perform data cleaning, feature engineering, and exploratory data analysis.
  • Translate business requirements into technical solutions; document and communicate project progress; manage non-technical stakeholders.
  • Collaborate with other data scientists and engineers to deliver projects.

Technical Skills – Must have:

  • Experience in and understanding of the natural language processing (NLP) and large language model (LLM) landscape.
  • Proficiency with Python for data analysis and supervised & unsupervised ML tasks.
  • Ability to translate complex machine learning problem statements into specific deliverables and requirements.
  • Experience with major cloud platforms such as AWS, Azure, or GCP.
  • Working knowledge of SQL and NoSQL databases.
  • Ability to create data and ML pipelines for more efficient and repeatable data science projects using MLOps principles.
  • Keeps abreast of new tools, algorithms, and techniques in machine learning and works to implement them in the organization.
  • Strong understanding of evaluation and monitoring metrics for machine learning projects.
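As a small illustration of the evaluation metrics mentioned above, here is a dependency-free sketch of precision, recall, and F1 for a binary classifier; the labels are toy data, not from any real project:

```python
def precision_recall_f1(y_true, y_pred, positive=1):
    """Compute precision, recall, and F1 for binary predictions."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

p, r, f = precision_recall_f1([1, 0, 1, 1, 0], [1, 0, 0, 1, 1])
print(round(p, 2), round(r, 2), round(f, 2))  # → 0.67 0.67 0.67
```

Libraries like scikit-learn provide the same metrics (plus monitoring-friendly variants such as AUC), but the arithmetic is this simple.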

Read more
Inncircles
Sharat Chandra Manchi Sarapu
Posted by Sharat Chandra Manchi Sarapu
Hyderabad
2 - 6 yrs
₹8L - ₹20L / yr
skill iconPython
skill iconFlask
FastAPI
skill iconDjango
Databases
+2 more

About Us:

We are a cutting-edge startup reshaping the construction management landscape with AI-driven solutions that simplify complex processes and maximize efficiency. Our platform leverages the latest web and mobile technologies to solve real-world challenges in the construction industry, blending innovation with usability. If you're passionate about building scalable systems and love solving problems, we want you on board!

Who You Are:

You are a tech enthusiast with a passion for clean, scalable backend systems built in Python. You have a knack for solving challenging problems and enjoy working in a fast-paced startup environment. You’re comfortable diving into code, debugging complex issues, and collaborating with cross-functional teams. While deep expertise in Python frameworks is a must, you’re also excited about emerging technologies like generative AI, machine learning, and deep learning.

What You’ll Do:

  • Develop & Maintain: Build robust, secure, and scalable backend services using Python frameworks like Flask, FastAPI, or Django.
  • API Design: Create and maintain RESTful APIs and microservices that power our platform.
  • Database Management: Design and optimize database schemas; ideally with MongoDB, though experience with other databases is also valued.
  • Integration: Collaborate with front-end and mobile teams to integrate seamless data flows and user experiences.
  • Innovate: Explore and integrate new technologies, including LLMs, generative AI, machine learning, and deep learning, to enhance our product offerings.
  • Cloud & DevOps: Work with cloud computing platforms (AWS or similar) to deploy, scale, and maintain backend systems.
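To illustrate the API-design work above, here is a minimal sketch of cursor-based pagination, the response shape many REST endpoints serialize to JSON. The field names and data are illustrative assumptions, not a prescribed contract:

```python
def paginate(items, cursor=0, limit=2):
    """Return one page plus the next cursor (None when exhausted),
    the shape a cursor-paginated REST endpoint would serialize to JSON."""
    page = items[cursor:cursor + limit]
    next_cursor = cursor + limit if cursor + limit < len(items) else None
    return {"data": page, "next_cursor": next_cursor}

projects = ["site-a", "site-b", "site-c"]  # hypothetical resource names
first = paginate(projects)                  # {'data': ['site-a', 'site-b'], 'next_cursor': 2}
second = paginate(projects, first["next_cursor"])
print(second)  # → {'data': ['site-c'], 'next_cursor': None}
```

Cursor pagination scales better than page/offset for large MongoDB collections, since the cursor can map onto an indexed field rather than a skip count.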

Tech Stack:

  • Backend: Python (Flask, FastAPI, or Django)
  • Database: MongoDB (preferred) or other relational/NoSQL databases
  • Cloud: AWS or other cloud platforms
  • Additional Tools: Git, Docker, CI/CD pipelines

What You Bring:

  • Experience: 2+ years of experience building scalable backend systems in Python.
  • Framework Proficiency: Solid hands-on experience with Flask, FastAPI, or Django.
  • Database Knowledge: Strong understanding of database design, indexing, and query optimization, preferably with MongoDB.
  • API Expertise: Experience designing and consuming RESTful APIs.
  • Version Control: Proficiency with Git and agile development practices.
  • Problem Solver: A keen eye for detail and a passion for writing clean, maintainable code.

Bonus Points For:

  • Exposure to and working experience with LLMs, generative AI, machine learning, deep learning, or fine-tuning models.
  • Familiarity with containerization (Docker) and modern CI/CD practices.
  • Experience working in a fast-paced startup environment.

Why Work With Us:

  • Impact: Join a mission-driven startup solving real-world problems in a trillion-dollar industry.
  • Innovation: Be part of a forward-thinking team that builds AI-powered, scalable tools from the ground up.
  • Growth: Enjoy rapid career advancement as our company scales, with ample space for your ideas to thrive.
  • Culture: Experience a collaborative, tech-driven, and fun work environment that values creativity, ownership, and continuous learning.


Read more
Inncircles
Gangadhar M
Posted by Gangadhar M
Hyderabad
4 - 8 yrs
Best in industry
NumPy
skill iconPython
pandas
skill iconMachine Learning (ML)
skill iconDeep Learning
+6 more

Job Title: Senior AI/ML/DL Engineer

Location: Hyderabad

Department: Artificial Intelligence/Machine Learning


Job Summary:

We are seeking a highly skilled and motivated Senior AI/ML/DL Engineer to contribute to the development and implementation of advanced artificial intelligence, machine learning, and deep learning solutions. The ideal candidate will have a strong technical background in AI/ML/DL, hands-on experience in building scalable models, and a passion for solving complex problems using data-driven approaches. This role involves working closely with cross-functional teams to deliver innovative AI/ML solutions aligned with business objectives.

Key Responsibilities:


Technical Execution:

● Design, develop, and deploy AI/ML/DL models and algorithms to solve business challenges.

● Stay up-to-date with the latest advancements in AI/ML/DL technologies and integrate them into solutions.

● Implement best practices for model development, validation, and deployment.
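One of the simplest validation best practices is a holdout split, keeping evaluation data separate from training data. A dependency-free sketch, where the split ratio and seed are arbitrary choices for illustration:

```python
import random

def train_test_split(data, test_ratio=0.25, seed=42):
    """Shuffle-and-split holdout validation: the basic guard against
    evaluating a model on the data it was trained on."""
    rng = random.Random(seed)       # fixed seed keeps the split reproducible
    shuffled = data[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_ratio))
    return shuffled[:cut], shuffled[cut:]

train, test = train_test_split(list(range(100)))
print(len(train), len(test))        # → 75 25
assert not set(train) & set(test)   # no leakage between the splits
```

Frameworks like scikit-learn offer the same primitive (plus k-fold cross-validation) for model selection at scale.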


Project Development:

● Collaborate with stakeholders to identify business opportunities and translate them into AI/ML projects.

● Work on the end-to-end lifecycle of AI/ML projects, including data collection, preprocessing, model training, evaluation, and deployment.

● Ensure the scalability, reliability, and performance of AI/ML solutions in production environments.


Cross-Functional Collaboration:

● Work closely with product managers, software engineers, and domain experts to integrate AI/ML capabilities into products and services.

● Communicate complex technical concepts to non-technical stakeholders effectively.


Research and Innovation:

● Explore new AI/ML techniques and methodologies to enhance solution capabilities.

● Prototype and experiment with novel approaches to solve challenging problems.

● Contribute to internal knowledge-sharing initiatives and documentation.


Quality Assurance & MLOps:

● Ensure the accuracy, robustness, and ethical use of AI/ML models.

● Implement monitoring and maintenance processes for deployed models to ensure long-term performance.

● Follow MLOps practices for efficient deployment and monitoring of AI/ML solutions.


Qualifications:


Education:

● Bachelor’s/Master’s or Ph.D. in Computer Science, Data Science, Artificial Intelligence, Machine Learning, or a related field.


Experience:

● 5+ years of experience in AI/ML/DL, with a proven track record of delivering AI/ML solutions in production environments.

● Strong experience with programming languages such as Python, R, or Java.

● Proficiency in AI/ML frameworks and tools (e.g., TensorFlow, PyTorch, Scikit-learn, Keras).

● Experience with cloud platforms (e.g., AWS, Azure, GCP) and big data technologies (e.g., Hadoop, Spark).

● Familiarity with MLOps practices and tools for model deployment and monitoring.


Skills:

● Strong understanding of machine learning algorithms, deep learning architectures, and statistical modeling.

● Excellent problem-solving and analytical skills.

● Strong communication and interpersonal skills.

● Ability to manage multiple projects and prioritize effectively.


Preferred Qualifications:

● Experience in natural language processing (NLP), computer vision, or reinforcement learning.

● Knowledge of ethical AI practices and regulatory compliance.

● Publications or contributions to the AI/ML community (e.g., research papers, open-source projects).


What We Offer:

● Competitive salary and benefits package.

● Opportunities for professional development and career growth.

● A collaborative and innovative work environment.

● The chance to work on impactful projects that leverage cutting-edge AI/ML technologies.

Read more
Inncircles
Gangadhar M
Posted by Gangadhar M
Hyderabad
3 - 5 yrs
Best in industry
PySpark
Spark
skill iconPython
ETL
Amazon EMR
+7 more


We are looking for a highly skilled Sr. Big Data Engineer with 3–5 years of experience in building large-scale data pipelines, real-time streaming solutions, and batch/stream processing systems. The ideal candidate should be proficient in Spark, Kafka, Python, and AWS Big Data services, with hands-on experience in implementing CDC (Change Data Capture) pipelines and integrating multiple data sources and sinks.


Responsibilities

  • Design, develop, and optimize batch and streaming data pipelines using Apache Spark and Python.
  • Build and maintain real-time data ingestion pipelines leveraging Kafka and AWS Kinesis.
  • Implement CDC (Change Data Capture) pipelines using Kafka Connect, Debezium or similar frameworks.
  • Integrate data from multiple sources and sinks (databases, APIs, message queues, file systems, cloud storage).
  • Work with AWS Big Data ecosystem: Glue, EMR, Kinesis, Athena, S3, Lambda, Step Functions.
  • Ensure pipeline scalability, reliability, and performance tuning of Spark jobs and EMR clusters.
  • Develop data transformation and ETL workflows in AWS Glue and manage schema evolution.
  • Collaborate with data scientists, analysts, and product teams to deliver reliable and high-quality data solutions.
  • Implement monitoring, logging, and alerting for critical data pipelines.
  • Follow best practices for data security, compliance, and cost optimization in cloud environments.
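To make the CDC idea concrete, here is a toy sketch that applies Debezium-style change events ("op" codes c/u/d with "before"/"after" row images) to an in-memory table. A real pipeline would consume these events from Kafka Connect topics rather than literal strings:

```python
import json

def apply_cdc_event(table, raw_event):
    """Apply one Debezium-style change event to a dict keyed by row id.

    Debezium encodes the operation under "op" (c=create, u=update, d=delete)
    with row images under "before"/"after".
    """
    event = json.loads(raw_event)
    op, before, after = event["op"], event.get("before"), event.get("after")
    if op in ("c", "u"):
        table[after["id"]] = after
    elif op == "d":
        table.pop(before["id"], None)
    return table

orders = {}
apply_cdc_event(orders, '{"op": "c", "after": {"id": 1, "status": "new"}}')
apply_cdc_event(orders, '{"op": "u", "before": {"id": 1, "status": "new"}, "after": {"id": 1, "status": "shipped"}}')
apply_cdc_event(orders, '{"op": "d", "before": {"id": 1, "status": "shipped"}}')
print(orders)  # → {}
```

In Spark Structured Streaming the same upsert/delete logic typically lands in a merge against a Delta/Iceberg/Hudi table.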


Required Skills & Experience

  • Programming: Strong proficiency in Python (PySpark, data frameworks, automation).
  • Big Data Processing: Hands-on experience with Apache Spark (batch & streaming).
  • Messaging & Streaming: Proficient in Kafka (brokers, topics, partitions, consumer groups) and AWS Kinesis.
  • CDC Pipelines: Experience with Debezium / Kafka Connect / custom CDC frameworks.
  • AWS Services: AWS Glue, EMR, S3, Athena, Lambda, IAM, CloudWatch.
  • ETL/ELT Workflows: Strong knowledge of data ingestion, transformation, partitioning, schema management.
  • Databases: Experience with relational databases (MySQL, Postgres, Oracle) and NoSQL (MongoDB, DynamoDB, Cassandra).
  • Data Formats: JSON, Parquet, Avro, ORC, Delta/Iceberg/Hudi.
  • Version Control & CI/CD: Git, GitHub/GitLab, Jenkins, or CodePipeline.
  • Monitoring/Logging: CloudWatch, Prometheus, ELK/Opensearch.
  • Containers & Orchestration (nice-to-have): Docker, Kubernetes, Airflow/Step Functions for workflow orchestration.


Preferred Qualifications

  • Bachelor’s or Master’s degree in Computer Science, Data Engineering, or related field.
  • Experience in large-scale data lake / lake house architectures.
  • Knowledge of data warehousing concepts and query optimization.
  • Familiarity with data governance, lineage, and cataloging tools (Glue Data Catalog, Apache Atlas).
  • Exposure to ML/AI data pipelines is a plus.


Tools & Technologies (must-have exposure)

  • Big Data & Processing: Apache Spark, PySpark, AWS EMR, AWS Glue
  • Streaming & Messaging: Apache Kafka, Kafka Connect, Debezium, AWS Kinesis
  • Cloud & Storage: AWS (S3, Athena, Lambda, IAM, CloudWatch)
  • Programming & Scripting: Python, SQL, Bash
  • Orchestration: Airflow / Step Functions
  • Version Control & CI/CD: Git, Jenkins/CodePipeline
  • Data Formats: Parquet, Avro, ORC, JSON, Delta, Iceberg, Hudi
Read more
Pune, Bengaluru (Bangalore), Hyderabad
8 - 12 yrs
₹14L - ₹15L / yr
skill iconR Programming
skill iconPython
Scikit-Learn
TensorFlow
PyTorch
+8 more

Role: Data Scientist (Python + R Expertise)

Exp: 8 -12 Years

CTC: up to 30 LPA


Required Skills & Qualifications:

  • 8–12 years of hands-on experience as a Data Scientist or in a similar analytical role.
  • Strong expertise in Python and R for data analysis, modeling, and visualization.
  • Proficiency in machine learning frameworks (scikit-learn, TensorFlow, PyTorch, caret, etc.).
  • Strong understanding of statistical modeling, hypothesis testing, regression, and classification techniques.
  • Experience with SQL and working with large-scale structured and unstructured data.
  • Familiarity with cloud platforms (AWS, Azure, or GCP) and deployment practices (Docker, MLflow).
  • Excellent analytical, problem-solving, and communication skills.


Preferred Skills:

  • Experience with NLP, time series forecasting, or deep learning projects.
  • Exposure to data visualization tools (Tableau, Power BI, or R Shiny).
  • Experience working in product or data-driven organizations.
  • Knowledge of MLOps and model lifecycle management is a plus.


If interested, kindly share your updated resume at 82008 31681.


Read more
FloData
Mahesh J
Posted by Mahesh J
Hyderabad
3 - 5 yrs
₹20L - ₹40L / yr
Generative AI
Retrieval Augmented Generation (RAG)
Prompt engineering
AI Agents
Langgraph
+5 more

Join us to reimagine how businesses integrate data and automate processes – with AI at the core.


About FloData

FloData is re-imagining the iPaaS and Business Process Automation (BPA) space for a new era - one where business teams, not just IT, can integrate data, run automations, and solve ops bottlenecks using intuitive, AI-driven interfaces. We're a small, hands-on team with a deep technical foundation and strong industry connections. Backed by real-world learnings from our earlier platform version, we're now going all-in on building a generative AI-first experience.


The Opportunity

We’re looking for a GenAI Engineer to help build the intelligence layer of our new platform. From designing LLM-powered orchestration flows with LangGraph to building frameworks for evaluation and monitoring with LangSmith, you’ll shape how AI powers real-world enterprise workflows.


If you thrive on working at the frontier of LLM systems engineering, enjoy scaling prototypes into production-grade systems, and want to make AI reliable, explainable, and enterprise-ready - this is your chance to define a category-defining product.


What You'll Do

  • Spend ~70% of your time architecting, prototyping, and productionizing AI systems (LLM orchestration, agents, evaluation, observability)
  • Develop AI frameworks: orchestration (LangGraph), evaluation/monitoring (LangSmith), vector/graph DBs, and other GenAI infra
  • Work with product engineers to seamlessly integrate AI services into frontend and backend workflows
  • Build systems for AI evaluation, monitoring, and reliability to ensure trustworthy performance at scale
  • Translate product needs into AI-first solutions, balancing rapid prototyping with enterprise-grade robustness
  • Stay ahead of the curve by exploring emerging GenAI frameworks, tools, and research for practical application
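A minimal sketch of the retrieval step behind RAG: rank document embeddings by cosine similarity to a query embedding and return the top match. Real systems use a vector database and learned embeddings; the vectors and document texts below are invented for illustration:

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def retrieve(query_vec, corpus, k=1):
    """Rank documents by similarity to the query embedding and return the
    top-k texts: the retrieval half of a RAG pipeline."""
    ranked = sorted(corpus, key=lambda d: cosine(query_vec, d["vec"]), reverse=True)
    return [d["text"] for d in ranked[:k]]

corpus = [
    {"text": "invoice approval flow", "vec": [0.9, 0.1, 0.0]},
    {"text": "employee onboarding",   "vec": [0.0, 0.2, 0.9]},
]
print(retrieve([1.0, 0.0, 0.0], corpus))  # → ['invoice approval flow']
```

The retrieved texts are then stuffed into the LLM prompt as grounding context, which is what frameworks like LangChain/LangGraph orchestrate end to end.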


Must Have

  • 3–5 years of engineering experience, with at least 1-2 years in GenAI systems
  • Hands-on experience with LangGraph, LangSmith, LangChain, or similar frameworks for orchestration/evaluation
  • Deep understanding of LLM workflows: prompt engineering, fine-tuning, RAG, evaluation, monitoring, and observability
  • A strong product mindset—comfortable bridging research-level concepts with production-ready business use cases
  • Startup mindset: resourceful, pragmatic, and outcome-driven


Good To Have

  • Experience integrating AI pipelines with enterprise applications and hybrid infra setups (AWS, on-prem, VPCs)
  • Experience building AI-native user experiences (assistants, copilots, intelligent automation flows)
  • Familiarity with enterprise SaaS/IT ecosystems (Salesforce, Oracle ERP, Netsuite, etc.)


Why Join Us

  • Own the AI backbone of a generational product at the intersection of AI, automation, and enterprise data
  • Work closely with founders and leadership with no layers of bureaucracy
  • End-to-end ownership of AI systems you design and ship
  • Be a thought partner in setting AI-first principles for both tech and culture
  • Onsite in Hyderabad, with flexibility when needed


Sounds like you?

We'd love to talk. Apply now or reach out directly to explore this opportunity.

Read more
US Base Company

US Base Company

Agency job
Hyderabad, Gurugram
10 - 18 yrs
₹20L - ₹35L / yr
skill iconPython
skill iconDjango
skill iconReact.js
Angular
skill iconJavascript
+3 more

Key Responsibilities

  • Design, develop, and maintain scalable microservices and RESTful APIs using Python (Flask, FastAPI, or Django).
  • Architect data models for SQL and NoSQL databases (PostgreSQL, ClickHouse, MongoDB, DynamoDB) to optimize performance and reliability.
  • Implement efficient and secure data access layers, caching, and indexing strategies.
  • Collaborate closely with product and frontend teams to deliver seamless user experiences.
  • Build responsive UI components using HTML, CSS, JavaScript, and frameworks like React or Angular.
  • Ensure system reliability, observability, and fault tolerance across services.
  • Lead code reviews, mentor junior engineers, and promote engineering best practices.
  • Contribute to DevOps and CI/CD workflows for smooth deployments and testing automation.
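To illustrate the caching strategies mentioned above, here is a minimal TTL (time-to-live) cache sketch with an injectable clock for deterministic testing; production systems would typically reach for Redis or memcached instead:

```python
import time

class TTLCache:
    """Minimal time-based cache for a read-heavy data access layer:
    entries expire after ttl_seconds, forcing a refresh from the database."""
    def __init__(self, ttl_seconds, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock        # injectable for testing
        self._store = {}

    def set(self, key, value):
        self._store[key] = (value, self.clock() + self.ttl)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if self.clock() >= expires_at:
            del self._store[key]  # lazy eviction on read
            return None
        return value

# Fake clock so the example is deterministic
now = [0.0]
cache = TTLCache(ttl_seconds=30, clock=lambda: now[0])
cache.set("user:42", {"name": "Ada"})
print(cache.get("user:42"))   # → {'name': 'Ada'}
now[0] = 31.0                 # advance past the TTL
print(cache.get("user:42"))   # → None
```

The key design trade-off is staleness vs. database load; the TTL bounds how stale a read can be.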

Required Skills & Experience

  • 10+ years of professional software development experience.
  • Strong proficiency in Python, with deep understanding of OOP, asynchronous programming, and performance optimization.
  • Proven expertise in building FastAPI-based microservices architectures.
  • Solid understanding of SQL and NoSQL data modeling, query optimization, and schema design.
  • Excellent hands-on frontend proficiency with HTML, CSS, JavaScript, and a modern framework (React, Angular, or Vue).
  • Experience working with cloud platforms (AWS, GCP, or Azure) and containerized deployments (Docker, Kubernetes).
  • Familiarity with distributed systems, event-driven architectures, and messaging queues (Kafka, RabbitMQ).
  • Excellent problem-solving, communication, and system design skills.


Read more
Technoidentity
Hyderabad
6 - 12 yrs
₹20L - ₹35L / yr
skill iconPython
FastAPI
PySpark

Supercharge Your Career as a Technical Lead - Python at Technoidentity!

Are you ready to solve the challenges that fuel business growth? At Technoidentity, we’re a Data+AI product engineering company building cutting-edge solutions in the FinTech domain for over 13 years—and we’re expanding globally. It’s the perfect time to join our team of tech innovators and leave your mark!

At Technoidentity, we’re a Data + AI product engineering company trusted to deliver scalable and modern enterprise solutions. Join us as a Senior Python Developer and Technical Lead, where you'll guide high-performing engineering teams, design complex systems, and deliver clean, scalable backend solutions using Python and modern data technologies. Your leadership will directly shape the architecture and execution of enterprise projects, with added strength in understanding database logic including PL/SQL and PostgreSQL/AlloyDB.

What’s in it for You?

• Modern Python Stack – Python 3.x, FastAPI, Pandas, NumPy, SQLAlchemy, PostgreSQL/AlloyDB, PL/pgSQL.

• Tech Leadership – Drive technical decision-making, mentor developers, and ensure code quality and scalability.

• Scalable Projects – Architect and optimize data-intensive backend services for high-throughput and distributed systems.

• Engineering Best Practices – Enforce clean architecture, code reviews, testing strategies, and SDLC alignment.

• Cross-Functional Collaboration – Lead conversations across engineering, QA, product, and DevOps to ensure delivery excellence.

What Will You Be Doing?

Technical Leadership

• Lead a team of developers through design, code reviews, and technical mentorship.

• Set architectural direction and ensure scalability, modularity, and code quality.

• Work with stakeholders to translate business goals into robust technical solutions.

Backend Development & Data Engineering

• Design and build clean, high-performance backend services using FastAPI and Python best practices.

• Handle row- and column-level data transformation using Pandas and NumPy.

• Apply data wrangling, cleansing, and preprocessing techniques across microservices and pipelines.

Database & Performance Optimization

• Write performant queries, procedures, and triggers using PostgreSQL and PL/pgSQL.

• Understand legacy logic in PL/SQL and participate in rewriting or modernizing it for PostgreSQL-based systems.

• Tune both backend and database performance, including memory, indexing, and query optimization.

Parallelism & Communication

• Implement multithreading, multiprocessing, and parallel data flows in Python.

• Integrate Kafka, RabbitMQ, or Pub/Sub systems for real-time and async message processing.
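A small sketch of the parallel data-flow idea above: fanning out I/O-bound calls with a thread pool from the standard library. The fetch function is a hypothetical stand-in for a database query or HTTP call:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_row_count(table):
    """Stand-in for an I/O-bound call (DB query, HTTP request, queue read)."""
    return len(table) * 10   # hypothetical work

tables = ["orders", "invoices", "payments"]

# Threads suit I/O-bound fan-out; CPU-bound transforms would use
# multiprocessing instead, to sidestep the GIL.
with ThreadPoolExecutor(max_workers=3) as pool:
    counts = dict(zip(tables, pool.map(fetch_row_count, tables)))

print(counts)  # → {'orders': 60, 'invoices': 80, 'payments': 80}
```

pool.map preserves input order, so zipping results back to table names is safe even though the calls run concurrently.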

Engineering Excellence

• Drive adherence to Agile, Git-based workflows, CI/CD, and DevOps pipelines.

• Promote testing (unit/integration), monitoring, and observability for all backend systems.

• Stay current with Python ecosystem evolution and introduce tools that improve productivity and performance.

What Makes You the Perfect Fit?

• 6–10 years of proven experience in Python development, with strong expertise in designing and delivering scalable backend solutions

Read more
Deqode

at Deqode

1 recruiter
Apoorva Jain
Posted by Apoorva Jain
Bengaluru (Bangalore), Mumbai, Delhi, Gurugram, Noida, Ghaziabad, Faridabad, Pune, Hyderabad, Nagpur, Ahmedabad, Jaipur, Kochi (Cochin)
3.6 - 8 yrs
₹4L - ₹18L / yr
skill iconPython
skill iconDjango
skill iconFlask
skill iconAmazon Web Services (AWS)
AWS Lambda
+3 more

Job Summary:

Deqode is looking for a highly motivated and experienced Python + AWS Developer to join our growing technology team. This role demands hands-on experience in backend development, cloud infrastructure (AWS), containerization, automation, and client communication. The ideal candidate should be a self-starter with a strong technical foundation and a passion for delivering high-quality, scalable solutions in a client-facing environment.


Key Responsibilities:

  • Design, develop, and deploy backend services and APIs using Python.
  • Build and maintain scalable infrastructure on AWS (EC2, S3, Lambda, RDS, etc.).
  • Automate deployments and infrastructure with Terraform and Jenkins/GitHub Actions.
  • Implement containerized environments using Docker and manage orchestration via Kubernetes.
  • Write automation and scripting solutions in Bash/Shell to streamline operations.
  • Work with relational databases like MySQL and SQL, including query optimization.
  • Collaborate directly with clients to understand requirements and provide technical solutions.
  • Ensure system reliability, performance, and scalability across environments.


Required Skills:

  • 3.5+ years of hands-on experience in Python development.
  • Strong expertise in AWS services such as EC2, Lambda, S3, RDS, IAM, CloudWatch.
  • Good understanding of Terraform or other Infrastructure as Code tools.
  • Proficient with Docker and container orchestration using Kubernetes.
  • Experience with CI/CD tools like Jenkins or GitHub Actions.
  • Strong command of SQL/MySQL and scripting with Bash/Shell.
  • Experience working with external clients or in client-facing roles.


Preferred Qualifications:

  • AWS Certification (e.g., AWS Certified Developer or DevOps Engineer).
  • Familiarity with Agile/Scrum methodologies.
  • Strong analytical and problem-solving skills.
  • Excellent communication and stakeholder management abilities.


Read more
Bengaluru (Bangalore), Mumbai, Delhi, Gurugram, Noida, Ghaziabad, Faridabad, Pune, Hyderabad, Mohali, Dehradun, Panchkula, Chennai
6 - 14 yrs
₹12L - ₹28L / yr
Test Automation (QA)
skill iconKubernetes
helm
skill iconDocker
skill iconAmazon Web Services (AWS)
+13 more

Job Title : Senior QA Automation Architect (Cloud & Kubernetes)

Experience : 6+ Years

Location : India (Multiple Offices)

Shift Timings : 12 PM to 9 PM (Noon Shift)

Working Days : 5 Days WFO (NO Hybrid)


About the Role :

We’re looking for a Senior QA Automation Architect with deep expertise in cloud-native systems, Kubernetes, and automation frameworks.

You’ll design scalable test architectures, enhance automation coverage, and ensure product reliability across hybrid-cloud and distributed environments.


Key Responsibilities :

  • Architect and maintain test automation frameworks for microservices.
  • Integrate automated tests into CI/CD pipelines (Jenkins, GitHub Actions).
  • Ensure reliability, scalability, and observability of test systems.
  • Work closely with DevOps and Cloud teams to streamline automation infrastructure.

Mandatory Skills :

  • Kubernetes, Helm, Docker, Linux
  • Cloud Platforms : AWS / Azure / GCP
  • CI/CD Tools : Jenkins, GitHub Actions
  • Scripting : Python, Pytest, Bash
  • Monitoring & Performance : Prometheus, Grafana, Jaeger, K6
  • IaC Practices : Terraform / Ansible
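As a flavor of the Python/Pytest scripting this role involves, here is a self-contained sketch of a deployment health check plus pytest-style tests; the function and pod names are hypothetical, and the tests are invoked directly so the snippet also runs standalone:

```python
# deployment_check.py — the code under test (hypothetical helper)
def healthy(status_by_pod, threshold=1.0):
    """True when the fraction of Running pods meets the threshold."""
    if not status_by_pod:
        return False
    running = sum(1 for s in status_by_pod.values() if s == "Running")
    return running / len(status_by_pod) >= threshold

# test_deployment_check.py — pytest discovers and runs test_* functions
def test_all_running():
    assert healthy({"api-0": "Running", "api-1": "Running"})

def test_crashloop_fails_strict_threshold():
    assert not healthy({"api-0": "Running", "api-1": "CrashLoopBackOff"})

# Direct calls so the example runs without a test runner
test_all_running()
test_crashloop_fails_strict_threshold()
```

In CI, the same tests would run via `pytest` inside the Jenkins or GitHub Actions pipeline, gating the deployment.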

Good to Have :

  • Experience with Service Mesh (Istio/Linkerd).
  • Container Security or DevSecOps exposure.
Read more