
50+ Python Jobs in India

Apply to 50+ Python Jobs on CutShort.io. Find your next job, effortlessly. Browse Python Jobs and apply today!

NeoGenCode Technologies Pvt Ltd
Posted by Ritika Verma
Bengaluru (Bangalore)
3 - 6 yrs
₹10L - ₹25L / yr
NodeJS (Node.js)
Python
Java
Amazon Web Services (AWS)
Docker
+1 more

Job Description

Key Responsibilities

  • API & Service Development:
  • Build RESTful and GraphQL APIs for e-commerce, order management, inventory, pricing, and promotions.
  • Database Management:
  • Design efficient schemas and optimize performance across SQL and NoSQL data stores.
  • Integration Development:
  • Implement and maintain integrations with ERP (SAP B1, ERPNext), CRM, logistics, and third-party systems.
  • System Performance & Reliability:
  • Write scalable, secure, and high-performance code to support real-time retail operations.
  • Collaboration:
  • Work closely with frontend, DevOps, and product teams to ship new features end-to-end.
  • Testing & Deployment:
  • Contribute to CI/CD pipelines, automated testing, and observability improvements.
  • Continuous Improvement:
  • Participate in architecture discussions and propose improvements to scalability and code quality.
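The pricing-and-promotions services in the responsibilities above reduce to applying business rules over order data. As a minimal, self-contained sketch of that idea (the `Promotion` type, its fields, and the `apply_promotion` helper are illustrative, not part of any specific stack):

```python
from dataclasses import dataclass

@dataclass
class Promotion:
    """A percentage-off promotion that only applies above a minimum order value."""
    code: str
    percent_off: float
    min_order_value: float

def apply_promotion(order_total: float, promo: Promotion) -> float:
    """Return the discounted order total; orders below the threshold are unchanged."""
    if order_total < promo.min_order_value:
        return order_total
    discount = order_total * promo.percent_off / 100.0
    return round(order_total - discount, 2)
```

A real service would expose this behind a REST or GraphQL endpoint and load promotion rules from a database rather than hard-coding them.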



Requirements

Required Skills & Experience

  • 3–5 years of hands-on backend development experience in Node.js, Python, or Java.
  • Strong understanding of microservices, REST APIs, and event-driven architectures.
  • Experience with databases such as MySQL/PostgreSQL (SQL) and MongoDB/Redis (NoSQL).
  • Hands-on experience with AWS / GCP and containerization (Docker, Kubernetes).
  • Familiarity with Git, CI/CD, and code review workflows.
  • Good understanding of API security, data protection, and authentication frameworks.
  • Strong problem-solving skills and attention to detail.


Nice to Have

  • Experience in e-commerce or omnichannel retail platforms.
  • Exposure to ERP / OMS / WMS integrations.
  • Familiarity with GraphQL, serverless, or Kafka / RabbitMQ.
  • Understanding of multi-brand or multi-country architecture challenges.


IBS Software Services
Kochi (Cochin)
8 - 12 yrs
₹20L - ₹25L / yr
Python
SQL
Marketing analytics

Role Overview

We are looking for a Senior Marketing Analytics professional with strong experience in Marketing Mix Modeling (MMM), Attribution Modeling, and ROI analysis. The role involves working closely with marketing and business leadership to deliver actionable insights that optimize marketing spend and drive business growth.

Key Responsibilities

  • Analyze large-scale marketing and customer datasets to deliver actionable business insights.
  • Build and maintain Marketing Mix Models (MMM) to measure media effectiveness and optimize marketing investments.
  • Design and implement attribution models (multi-touch, incrementality, lift analysis) to evaluate campaign performance.
  • Perform ROI, CAC, ROAS, and funnel analysis across marketing channels.
  • Write complex SQL queries to extract, combine, and analyze data from multiple sources.
  • Use Python for statistical analysis, regression modeling, forecasting, and experimentation.
  • Develop and publish Tableau dashboards and automated reports for leadership and stakeholders.
  • Work with marketing platforms such as Google Analytics (GA4), Adobe Analytics, Salesforce Marketing Cloud, Marketo, or similar tools.
  • Collaborate with cross-functional teams to define KPIs, reporting requirements, and analytics roadmaps.
  • Present insights and recommendations clearly to senior leadership and non-technical stakeholders.
  • Ensure data accuracy, consistency, and documentation of analytics methodologies.
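At its core, Marketing Mix Modeling regresses an outcome (e.g., sales) on media spend to estimate incremental return. As a toy illustration of that idea, a single-channel ordinary-least-squares fit in plain Python, with illustrative data and function names (real MMMs are multivariate and add adstock and saturation transforms):

```python
def fit_simple_mmm(spend, sales):
    """Fit sales ~ base + roi * spend by ordinary least squares (one channel)."""
    n = len(spend)
    mean_x = sum(spend) / n
    mean_y = sum(sales) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(spend, sales))
    var = sum((x - mean_x) ** 2 for x in spend)
    roi = cov / var               # incremental sales per unit of spend
    base = mean_y - roi * mean_x  # baseline sales at zero spend
    return base, roi
```

With weekly spend of [0, 1, 2, 3] and sales of [10, 12, 14, 16], the fit recovers a baseline of 10 and an ROI of 2 sales units per spend unit.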

Required Skills & Qualifications

  • 8+ years of experience in analytics, with a strong focus on marketing or digital analytics.
  • Hands-on expertise in Marketing Mix Modeling (MMM) and Attribution Modeling.
  • Strong proficiency in SQL and Python for data analysis.
  • Experience with Tableau for dashboarding and automated reporting.
  • Working knowledge of Google Analytics / GA4, Adobe Analytics, and marketing automation or CRM tools.
  • Strong understanding of data modeling, reporting, and ROI measurement.
  • Excellent stakeholder management, communication, and data storytelling skills.
  • Ability to work independently in a fast-paced and ambiguous environment.

Good to Have

  • Experience with Power BI / Looker / BigQuery
  • Exposure to A/B testing, experimentation, or econometric modeling
  • Experience working with large marketing datasets and cloud platforms



Proximity Works

Posted by Nikita Sinha
Mumbai
5 - 8 yrs
Upto ₹45L / yr (Varies)
Java
Python
Google Analytics
Vector database
Machine Learning (ML)

We are looking for a Senior Backend Engineer to build and operate the core AI/ML-backed systems that power large-scale, consumer-facing products. You will work on production-grade AI runtimes, retrieval systems, and ML-adjacent backend infrastructure, making pragmatic tradeoffs across quality, latency, reliability, and cost.

This role is not an entry point into AI/ML. You are expected to already have hands-on experience shipping ML-backed backend systems in production.


At Proximity, you won’t just build APIs - you’ll own critical backend systems end-to-end, collaborate closely with Applied ML and Product teams, and help define the foundations that power intelligent experiences at scale.


Responsibilities -

  • Own and deliver end-to-end backend systems for AI product runtime, including orchestration, request lifecycle management, state/session handling, and policy enforcement.
  • Design and implement retrieval and memory primitives end-to-end — document ingestion, chunking strategies, embeddings generation, indexing, vector/hybrid search, re-ranking, caching, freshness, and deletion semantics.
  • Productionize ML workflows and interfaces, including feature and metadata services, online/offline parity, model integration contracts, and evaluation instrumentation.
  • Drive performance, reliability, and cost optimization, owning P50/P95 latency, throughput, cache hit rates, token and inference costs, and infrastructure efficiency.
  • Build observability by default, including structured logs, metrics, distributed tracing, guardrail signals, failure taxonomies, and reliable fallback paths.
  • Collaborate closely with Applied ML teams on model routing, prompt and tool schemas, evaluation datasets, and release safety gates.
  • Write clean, testable, and maintainable backend code, contributing to design reviews, code reviews, and operational best practices.
  • Take systems from design → build → deploy → operate, including on-call ownership and incident response.
  • Continuously identify bottlenecks and failure modes in AI-backed systems and proactively improve system robustness.

Requirements

  • Bachelor’s degree in Computer Science, Engineering, or a related field, or equivalent practical experience.
  • 6–10 years of experience building backend systems in production, with 2–3+ years working on ML/AI-backed products such as search, recommendations, ranking, RAG pipelines, or AI assistants.
  • Strong practical understanding of ML system fundamentals, including embeddings, vector similarity, reranking, retrieval quality, and evaluation metrics (precision/recall, nDCG, MRR).
  • Proven experience implementing or operating RAG pipelines, covering ingestion, chunking, indexing, query understanding, hybrid retrieval, and rerankers.
  • Solid distributed systems fundamentals, including API design, idempotency, concurrency, retries, circuit breakers, rate limiting, and multi-tenant reliability.
  • Experience with common ML/AI platform components, such as feature stores, metadata systems, streaming or batch pipelines, offline evaluation jobs, and A/B measurement hooks.
  • Strong proficiency in backend programming languages and frameworks (e.g., Go, Java, Python, or similar) and API development.
  • Ability to work independently, take ownership of complex systems, and collaborate effectively with cross-functional teams.
  • Strong problem-solving, communication, and system-design skills.
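The retrieval-evaluation metrics named in the requirements have compact definitions. A plain-Python sketch of MRR (mean reciprocal rank) over per-query binary relevance flags and nDCG (normalized discounted cumulative gain) over graded relevance, with illustrative helper names:

```python
import math

def mrr(ranked_relevance):
    """Mean reciprocal rank: ranked_relevance is a list per query of 0/1
    flags in rank order; each query contributes 1/rank of its first hit."""
    total = 0.0
    for flags in ranked_relevance:
        for rank, rel in enumerate(flags, start=1):
            if rel:
                total += 1.0 / rank
                break
    return total / len(ranked_relevance)

def dcg(rels):
    """Discounted cumulative gain with a log2(rank + 1) position discount."""
    return sum(rel / math.log2(rank + 1) for rank, rel in enumerate(rels, start=1))

def ndcg(rels):
    """DCG normalized by the DCG of the ideal (descending) ordering."""
    ideal = dcg(sorted(rels, reverse=True))
    return dcg(rels) / ideal if ideal > 0 else 0.0
```

A perfect ranking (relevance already descending) scores an nDCG of 1.0; misplaced relevant items pull it below 1.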


Desired Skills -

  • Experience with agentic runtimes, including tool-calling or function-calling patterns, structured outputs, and production guardrails.
  • Hands-on exposure to vector and hybrid retrieval stacks such as FAISS, Milvus, Pinecone, or Elasticsearch.
  • Experience running systems on Kubernetes, with strong knowledge of observability stacks like OpenTelemetry, Prometheus, Grafana, and distributed tracing.
  • Familiarity with privacy, security, and data governance considerations for user and model data.

Benefits

  • Best in class compensation: We hire only the best, and we pay accordingly.
  • Proximity Talks: Meet engineers, designers, and product leaders — and learn from experts across domains.

  • Keep on learning with a world-class team: Work on real, production AI systems at scale, challenge yourself daily, and grow alongside some of the best minds in the industry.



Leading digital testing boutique firm


Agency job
via Peak Hire Solutions by Dhara Thakkar
Delhi
5 - 8 yrs
₹11L - ₹15L / yr
Artificial Intelligence (AI)
Machine Learning (ML)
Software Testing (QA)
Natural Language Processing (NLP)
Analytics
+11 more

Review Criteria

  • Strong AI/ML Test Engineer
  • 5+ years of overall experience in Testing/QA
  • 2+ years of experience in testing AI/ML models and data-driven applications, across NLP, recommendation engines, fraud detection, and advanced analytics models
  • Must have expertise in validating AI/ML models for accuracy, bias, explainability, and performance, ensuring decisions are fair, reliable, and transparent
  • Must have strong experience to design AI/ML test strategies, including boundary testing, adversarial input simulation, and anomaly monitoring to detect manipulation attempts by marketplace users (buyers/sellers)
  • Proficiency in AI/ML testing frameworks and tools (like PyTest, TensorFlow Model Analysis, MLflow, Python-based data validation libraries, Jupyter) with the ability to integrate into CI/CD pipelines
  • Must understand marketplace misuse scenarios, such as manipulating recommendation algorithms, biasing fraud detection systems, or exploiting gaps in automated scoring
  • Must have strong verbal and written communication skills, able to collaborate with data scientists, engineers, and business stakeholders to articulate testing outcomes and issues.
  • Degree in Engineering, Computer Science, IT, Data Science, or a related discipline (B.E./B.Tech/M.Tech/MCA/MS or equivalent)
  • Candidate must be based within Delhi NCR (100 km radius)


Preferred

  • Certifications such as ISTQB AI Testing, TensorFlow, Cloud AI, or equivalent applied AI credentials are an added advantage.


Job Specific Criteria

  • CV Attachment is mandatory
  • Have you worked with large datasets for AI/ML testing?
  • Have you automated AI/ML testing using PyTest, Jupyter notebooks, or CI/CD pipelines?
  • Please provide details of 2 key AI/ML testing projects you have worked on, including your role, responsibilities, and tools/frameworks used.
  • Are you willing to relocate to Delhi and why (if not from Delhi)?
  • Are you available for a face-to-face round?


Role & Responsibilities

  • 5 years' experience in testing AI/ML models and data-driven applications, including natural language processing (NLP), recommendation engines, fraud detection, and advanced analytics models.
  • Proven expertise in validating AI models for accuracy, bias, explainability, and performance, ensuring decisions (e.g., bid scoring, supplier ranking, fraud detection) are fair, reliable, and transparent.
  • Hands-on experience in data validation and model testing, ensuring training and inference pipelines align with business requirements and procurement rules.
  • Strong data science skills, equipped to design test strategies for AI systems, including boundary testing, adversarial input simulation, and drift monitoring to detect manipulation attempts by marketplace users (buyers/sellers).
  • Proficient in defining AI/ML testing frameworks and tools (TensorFlow Model Analysis, MLflow, PyTest, Python-based data validation libraries, Jupyter), with the ability to integrate them into CI/CD pipelines.
  • Business awareness of marketplace misuse scenarios, such as manipulating recommendation algorithms, biasing fraud detection systems, or exploiting gaps in automated scoring.
  • Education & Certifications: Bachelor's/Master's in Engineering, CS/IT, Data Science, or equivalent.
  • Preferred Certifications: ISTQB AI Testing, TensorFlow/Cloud AI certifications, or equivalent applied AI credentials.
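The validation gates described above are often encoded as automated, pytest-style tests. A stand-alone sketch of an accuracy gate and a simple demographic-parity bias gate (thresholds, metric choices, and data are illustrative; a real suite would score a trained model on a held-out evaluation set):

```python
def accuracy(predictions, labels):
    """Fraction of predictions that match the ground-truth labels."""
    return sum(p == y for p, y in zip(predictions, labels)) / len(labels)

def demographic_parity_gap(predictions, groups):
    """Absolute difference in positive-prediction rate between two groups
    (assumes exactly two distinct group values)."""
    rates = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        rates[g] = sum(predictions[i] for i in idx) / len(idx)
    a, b = rates.values()
    return abs(a - b)

def test_model_meets_quality_gates():
    # Toy predictions, labels, and group memberships standing in for real
    # model output on an evaluation set.
    preds = [1, 0, 1, 1, 0, 1, 1, 0]
    labels = [1, 0, 1, 0, 0, 1, 1, 1]
    groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
    assert accuracy(preds, labels) >= 0.7              # accuracy gate
    assert demographic_parity_gap(preds, groups) <= 0.3  # bias gate
```

Tests like these slot into a CI/CD pipeline so that a model failing an accuracy or fairness threshold blocks the release.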


Fluxon

Posted by Ariba Khan
Remote only
5 - 10 yrs
Upto ₹55L / yr (Varies)
Python
Artificial Intelligence (AI)
Generative AI
LangGraph
LangChain

Who we are

We’re Fluxon, a product development team founded by ex-Googlers and startup founders. We offer full-cycle software development, from ideation and design to build and go-to-market. We partner with visionary companies, ranging from fast-growing startups to tech leaders like Google and Stripe, to turn bold ideas into products with the power to transform the world.


About the role

As an AI Engineer at Fluxon, you’ll take the lead in designing, building and deploying AI-powered applications for our clients.


You'll be responsible for:

  • System Architecture: Design and implement end-to-end AI systems and their parts, including data ingestion, preprocessing, model inference, and output structuring
  • Generative AI Development: Build and optimize RAG (Retrieval-Augmented Generation) systems and Agentic workflows using frameworks like LangChain, LangGraph, ADK (Agent Development Kit), Genkit
  • Production Engineering: Deploy models to production environments (AWS/GCP/Azure) using Docker and Kubernetes, ensuring high availability and scalability
  • Evaluation & Monitoring: Implement feedback loops to evaluate model performance (accuracy, hallucinations, relevance) and set up monitoring for drift in production
  • Collaboration: Work closely with other engineers to integrate AI endpoints into the core product and with product managers to define AI capabilities
  • Model Optimization: Fine-tune open-source models (e.g., Llama, Mistral) for specific domain tasks and optimize them for latency and cost
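The RAG flow referenced above (ingest → chunk → embed → retrieve) can be sketched with standard-library stand-ins; here a word-count `Counter` substitutes for a learned embedding model, and all function names are illustrative:

```python
import math
from collections import Counter

def chunk(text, size=40):
    """Split text into fixed-size word chunks (real systems use semantic chunking)."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def embed(text):
    """Toy bag-of-words 'embedding'; production systems use learned embeddings."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=2):
    """Rank chunks by similarity to the query and return the top k."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]
```

In a production system the retrieved chunks would be passed to an LLM as context; frameworks like LangChain and LangGraph orchestrate exactly this loop with real vector stores.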


You'll work with technologies including:

Languages

  • Python (Preferred)
  • Java / C++ / Scala / R / JavaScript

AI / ML

  • LangChain
  • LangGraph
  • Google ADK
  • Genkit
  • OpenAI API
  • LLM - Large Language Model
  • Vertex AI

Cloud & Infrastructure

  • Platforms: Google Cloud Platform (GCP) or Amazon Web Services (AWS)
  • Storage: Google Cloud Storage (GCS) or AWS S3
  • Orchestration: Temporal, Kubernetes
  • Data Stores
  • PostgreSQL
  • Firestore
  • MongoDB

Monitoring & Observability

  • GCP Cloud Monitoring Suite


Qualifications

  • 5+ years of industry experience in software engineering roles
  • Strong proficiency in Python, or another mainstream language such as Scala, JavaScript, or Java
  • Strong understanding of Transformer architectures, embeddings, and vector similarity search
  • Experience integrating with LLM provider APIs (OpenAI, Anthropic, Google Vertex AI)
  • Hands-on experience with agent workflows like LangChain, LangGraph
  • Experience with Vector Databases and traditional SQL / NoSQL databases
  • Familiarity with cloud platforms, preferably GCP or AWS
  • Understanding of patterns like RAG (Retrieval-Augmented Generation), few-shot prompting, and Fine-Tuning
  • Solid understanding of software development practices including version control (Git) and CI/CD

Nice to have:

  • Experience with Google Cloud Platform (GCP) services, specifically Vertex AI, Firestore, and Cloud Functions
  • Knowledge of prompt engineering techniques (Chain-of-Thought, ReAct, Tree of Thoughts)
  • Experience building "Agentic" workflows where AI can execute tools or API calls autonomously


What we offer

  • Exposure to high-profile SV startups and enterprise companies
  • Competitive salary
  • Fully remote work with flexible hours
  • Flexible paid time off
  • Profit-sharing program
  • Healthcare
  • Parental leave, including adoption and fostering
  • Gym membership and tuition reimbursement
  • Hands-on career development
Codemonk

Posted by Reshika Mendiratta
Bengaluru (Bangalore)
7+ yrs
Upto ₹42L / yr (Varies)
NodeJS (Node.js)
Python
Google Cloud Platform (GCP)
RESTful APIs
SQL
+4 more

Like us, you'll be deeply committed to delivering impactful outcomes for customers.

  • 7+ years of demonstrated ability to develop resilient, high-performance, and scalable code tailored to application usage demands.
  • Ability to lead by example with hands-on development while managing project timelines and deliverables. Experience in agile methodologies and practices, including sprint planning and execution, to drive team performance and project success.
  • Deep expertise in Node.js, with experience in building and maintaining complex, production-grade RESTful APIs and backend services.
  • Experience writing batch/cron jobs using Python and Shell scripting.
  • Experience in web application development using JavaScript and JavaScript libraries.
  • Have a basic understanding of TypeScript, JavaScript, HTML, CSS, JSON, and REST-based applications.
  • Experience/Familiarity with RDBMS and NoSQL Database technologies like MySQL, MongoDB, Redis, ElasticSearch and other similar databases.
  • Understanding of code versioning tools such as Git.
  • Understanding of building applications deployed on the cloud using Google Cloud Platform (GCP) or Amazon Web Services (AWS).
  • Experienced in JS-based build/package tools like Grunt, Gulp, Bower, Webpack.
DeepIntent

Posted by Amruta Mundale
Pune
5 - 8 yrs
Best in industry
Python
SQL
Spark
Airflow
pandas
+6 more

What You’ll Do:

As a Sr. Data Scientist, you will work closely with DeepIntent Data Science teams located in New York, India, and Bosnia. The role focuses on building predictive models and implementing data-driven solutions to maximize ad effectiveness. You will also lead efforts in generating analyses and insights related to the measurement of campaign outcomes, Rx, and the patient journey, and in supporting the evolution of the DeepIntent product suite. Activities in this position include developing and deploying models in production; reading campaign results; analyzing medical claims, clinical, demographic, and clickstream data; performing analysis and creating actionable insights; and summarizing and presenting results and recommended actions to internal stakeholders and external clients as needed.

  • Explore ways to create better predictive models.
  • Analyze medical claims, clinical, demographic and clickstream data to produce and present actionable insights.
  • Explore ways of using inference, statistical, and machine learning techniques to improve the performance of existing algorithms and decision heuristics.
  • Design and deploy new iterations of production-level code.
  • Contribute posts to our upcoming technical blog.
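As a minimal illustration of the predictive-modeling loop described above, here is a from-scratch logistic regression trained on toy, linearly separable data (illustrative only; in practice this would be scikit-learn or gradient boosting over claims and clickstream features):

```python
import math

def sigmoid(z):
    """Logistic function mapping a real score to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def train_logreg(xs, ys, lr=0.1, epochs=2000):
    """Fit a one-feature logistic regression by stochastic gradient descent."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            p = sigmoid(w * x + b)
            w -= lr * (p - y) * x  # gradient of log-loss w.r.t. weight
            b -= lr * (p - y)      # gradient of log-loss w.r.t. bias
    return w, b

def predict(w, b, x):
    """Classify x as positive when the predicted probability reaches 0.5."""
    return sigmoid(w * x + b) >= 0.5
```

On data where low feature values are negatives and high values are positives, the trained model places its decision boundary between the two clusters.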

Who You Are:

  • Bachelor’s degree in a STEM field, such as Statistics, Mathematics, Engineering, Biostatistics, Econometrics, Economics, Finance, or Data Science.
  • 5+ years of working experience as a Data Scientist or Researcher in digital marketing, consumer advertisement, telecom, or other areas requiring customer-level predictive analytics.
  • Advanced proficiency in performing statistical analysis in Python, including relevant libraries, is required.
  • Experience working with data processing, transformation and building model pipelines using tools such as Spark, Airflow, and Docker.
  • You have an understanding of the ad-tech ecosystem, digital marketing and advertising data and campaigns or familiarity with the US healthcare patient and provider systems (e.g. medical claims, medications).
  • You have varied and hands-on predictive machine learning experience (deep learning, boosting algorithms, inference…).
  • You are interested in translating complex quantitative results into meaningful findings and interpretable deliverables, and communicating with less technical audiences orally and in writing.
  • You can write production level code, work with Git repositories.
  • Active Kaggle participant.
  • Working experience with SQL.
  • Familiar with medical and healthcare data (medical claims, Rx, preferred).
  • Conversant with cloud technologies such as AWS or Google Cloud.


VRT Management Group
Posted by Archana Chakali
Hyderabad
1 - 5 yrs
₹2L - ₹7L / yr
NextJs (Next.js)
React.js
MongoDB
Tailwind CSS
NodeJS (Node.js)
+4 more

Job Title: Full Stack Developer with Design Expertise

Location: Santosh Nagar, Hyderabad, Telangana (On-site)

Employment Type: Full-Time

Company: VRT Management Group

 

About Us:

At VRT Management Group, we are a dynamic entrepreneurial consulting firm helping SMBs across the USA transform their people, processes, and strategies. As we expand our digital capabilities, we are seeking a skilled and driven Full Stack Developer to join our team full-time and take ownership of our web development and automation needs.

 

Key Responsibilities:

  • Website and Landing Page Hosting: Build, host, and maintain dynamic websites and high-converting landing pages that align with VRT’s brand identity and business objectives.
  • UI/UX Design: Design and implement user-friendly interfaces that ensure seamless navigation and deliver an exceptional user experience across all digital platforms.
  • Internal Tools Development: Design and develop intuitive, scalable internal tools to support various departments, improve operational workflows, and enhance cross-team productivity.
  • Automation Processes: Develop and integrate automation workflows to streamline business operations, enhancing productivity and efficiency.
  • Cross-Functional Collaboration: Work closely with marketing, design, and content teams to ensure seamless integration and performance of digital platforms.

 

Qualifications and Skills:

  • Proven experience as a Full Stack Developer, with a strong portfolio of web development projects.
  • Strong knowledge of Next.js / React
  • Experience with MongoDB and backend development (Node.js)
  • Proficiency in Tailwind CSS
  • Understanding of REST APIs, authentication, and state management
  • Familiarity with Git and deployment workflows
  • Strong problem-solving skills and the ability to work in a collaborative, fast-paced environment.
  • Bachelor’s degree in Computer Science, Information Technology, or a related field (preferred).

Nice to Have:

  • Experience with authentication (Cloudflare, JWT, OAuth)
  • Knowledge of cloud platforms (Vercel)

What We Offer:

  • A vibrant workplace where your contributions directly impact business success.
  • Opportunities to innovate and implement cutting-edge technologies.
  • The chance to grow with a company that values continuous learning and professional development.

 

AdElement

Posted by Ritisha Nigam
Pune
2 - 5 yrs
₹3L - ₹7L / yr
adtech
SQL
Java
JavaScript
Python

Company Description


AdElement is a leading digital advertising technology company that has been helping app publishers increase their ad revenue and reach untapped demand since 2011. With our expertise in connecting brands to app audiences on evolving screens, such as VR headsets and vehicle consoles, we enable our clients to be first to market. We have been recognized as the Google Agency of the Year and have offices globally, with our headquarters located in New Brunswick, New Jersey.


Job Description


Work alongside a highly skilled engineering team to design, develop, and maintain large-scale, highly performant, real-time applications.

Own feature development, driving it directly with product and other engineering teams.

Demonstrate excellent communication skills in working with technical and non-technical audiences.

Be an evangelist for best practices across all functions - developers, QA, and infrastructure/ops.

Be an evangelist for platform innovation and reuse.


Requirements:


2+ years of experience building large-scale and low-latency distributed systems.

Command of Java or C++.

Solid understanding of algorithms, data structures, performance optimization techniques, object-oriented programming, multi-threading, and real-time programming.

Experience with distributed caching, SQL/NoSQL, and other databases is a plus.

Experience with Big Data and cloud services such as AWS/GCP is a plus.

Experience in the advertising domain is a big plus.

B.S. or M.S. degree in Computer Science, Engineering, or equivalent.


Location: Pune, Maharashtra.





Proximity Works

Posted by Nikita Sinha
Mumbai, Navi Mumbai
3 - 5 yrs
Upto ₹40L / yr (Varies)
Python
Java
Scala
Data engineering
Hadoop
+2 more

We are looking for a Data Engineer to help build and scale the data pipelines and core datasets that power analytics, AI model evaluation, safety systems, and business decision-making across Bharat AI’s agentic AI platform.


This role sits at the heart of how data flows through the organization. You will work closely with Product, Data Science, Infrastructure, Marketing, Finance, and AI/Research teams to ensure data is reliable, accessible, and production-ready as the platform scales rapidly.

At Proximity, you won’t just move data — your work will directly influence how AI systems are trained, evaluated, monitored, and improved.


Responsibilities -

  • Design, build, and manage scalable data pipelines, ensuring user event data is reliably ingested into the data warehouse.
  • Develop and maintain canonical datasets to track key product metrics such as user growth, engagement, retention, and revenue.
  • Collaborate with Infrastructure, Data Science, Product, Marketing, Finance, and Research teams to understand data needs and deliver effective solutions.
  • Implement robust, fault-tolerant systems for data ingestion, transformation, and processing.
  • Participate actively in data architecture and engineering decisions, contributing best practices and long-term scalability thinking.
  • Ensure data security, integrity, and compliance in line with company policies and industry standards.
  • Monitor pipeline health, troubleshoot failures, and continuously improve reliability and performance.

Requirements

  • 3-5 years of professional experience working as a Data Engineer or in a similar role.
  • Proficiency in at least one data engineering programming language such as Python, Scala, or Java.
  • Experience with distributed data processing frameworks and technologies such as Hadoop, Flink, and distributed storage systems (e.g., HDFS).
  • Strong expertise with ETL orchestration tools, such as Apache Airflow.
  • Solid understanding of Apache Spark, with the ability to write, debug, and optimize Spark jobs.
  • Experience designing and maintaining data pipelines for analytics, reporting, or ML use cases.
  • Strong problem-solving skills and the ability to work across teams with varied data requirements.
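Conceptually, the canonical-dataset work above reduces to deduplicating raw events and aggregating them into metrics such as daily active users. A pure-Python stand-in for what would typically run as a Spark job scheduled by Airflow (the event schema and names are illustrative):

```python
from collections import defaultdict
from datetime import datetime

def daily_active_users(events):
    """Aggregate raw user events into a canonical date -> DAU mapping.

    `events` is an iterable of dicts with 'user_id' and ISO-8601 'ts' keys;
    repeat events from the same user within a day are collapsed, mirroring
    the dedupe step of a real ingestion pipeline.
    """
    seen = defaultdict(set)
    for ev in events:
        day = datetime.fromisoformat(ev["ts"]).date().isoformat()
        seen[day].add(ev["user_id"])
    return {day: len(users) for day, users in sorted(seen.items())}
```

The same group-and-count shape extends to engagement, retention, and revenue metrics once events carry the relevant fields.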

Desired Skills -

  • Hands-on experience working with Databricks in production environments.
  • Familiarity with the GCP data stack, including Pub/Sub, Dataflow, BigQuery, and Google Cloud Storage (GCS).
  • Exposure to data quality frameworks, data validation, or schema management tools.
  • Understanding of analytics use cases, experimentation, or ML data workflows.

Benefits

  • Best-in-class compensation: We hire only the best, and we pay accordingly.
  • Proximity Talks: Learn from experienced engineers, data scientists, and product leaders.
  • High-impact work: Build data systems that directly power AI models and business decisions.
  • Continuous learning: Work with a strong, collaborative team and grow your data engineering skills every day.
Wama Technology
Mumbai
4 - 6 yrs
₹6L - ₹9L / yr
Django
Python
Flask
FastAPI
RESTful APIs
+2 more

Job Title: Python Developer (4–6 Years Experience)

Location: Mumbai (Onsite)

Experience: 4–6 Years

Salary: ₹50,000 – ₹90,000 per month (depending on experience & skill set)

Employment Type: Full-time

Job Description

We are looking for an experienced Python Developer to join our growing team in Mumbai. The ideal candidate will have strong hands-on experience in Python development, building scalable backend systems, and working with databases and APIs.

Key Responsibilities

  • Design, develop, test, and maintain Python-based applications
  • Build and integrate RESTful APIs
  • Work with frameworks such as Django / Flask / FastAPI
  • Write clean, reusable, and efficient code
  • Collaborate with frontend developers, QA, and project managers
  • Optimize application performance and scalability
  • Debug, troubleshoot, and resolve technical issues
  • Participate in code reviews and follow best coding practices
  • Work with databases and ensure data security and integrity
  • Deploy and maintain applications in staging/production environments

Required Skills & Qualifications

  • 4–6 years of hands-on experience in Python development
  • Strong experience with Django / Flask / FastAPI
  • Good understanding of REST APIs
  • Experience with MySQL / PostgreSQL / MongoDB
  • Familiarity with Git and version control workflows
  • Knowledge of OOP concepts and design principles
  • Experience with Linux-based environments
  • Understanding of basic security and performance optimization
  • Ability to work independently as well as in a team
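Much of the work listed above revolves around REST APIs and clean, reusable code. As a framework-agnostic sketch using only the standard library, here is input validation for a hypothetical POST /users endpoint; the field names are invented, and in Django / Flask / FastAPI the same checks would sit behind a route, serializer, or Pydantic model:

```python
import json

def parse_user_payload(raw: str) -> dict:
    """Validate and normalize a JSON body for a hypothetical POST /users.

    Raises ValueError on bad input so the caller can map it to an
    HTTP 400 response in whatever framework is in use.
    """
    data = json.loads(raw)
    if not isinstance(data.get("email"), str) or "@" not in data["email"]:
        raise ValueError("invalid email")
    if not isinstance(data.get("age"), int) or data["age"] < 0:
        raise ValueError("invalid age")
    # Normalize before persisting: lowercase the email.
    return {"email": data["email"].lower(), "age": data["age"]}

print(parse_user_payload('{"email": "A@B.com", "age": 30}'))
```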

Good to Have (Preferred Skills)

  • Experience with AWS / cloud services
  • Knowledge of Docker / CI-CD pipelines
  • Exposure to microservices architecture
  • Basic frontend knowledge (HTML, CSS, JavaScript)
  • Experience working in an Agile/Scrum environment

Job Type: Full-time

Application Question(s):

  • If selected, how soon can you join?

Experience:

  • Total: 3 years (Required)
  • Python: 3 years (Required)

Location:

  • Mumbai, Maharashtra (Required)

Work Location: In person

Borderless Access

Reshika Mendiratta
Posted by Reshika Mendiratta
Bengaluru (Bangalore)
13yrs+
₹32L - ₹35L / yr
Python
Java
NodeJS (Node.js)
Spring Boot
JavaScript
+14 more

About Borderless Access

Borderless Access is a company that believes in fostering a culture of innovation and collaboration to build and deliver digital-first products for market research methodologies. This enables our customers to stay ahead of their competition.

We are committed to becoming the global leader in providing innovative digital offerings for consumers backed by advanced analytics, AI, ML, and cutting-edge technological capabilities.

Our Borderless Product Innovation and Operations team is dedicated to creating a top-tier market research platform that will drive our organization's growth. To achieve this, we're embracing modern technologies and a cutting-edge tech stack for faster, higher-quality product development.

The Product Development team is the core of our strategy, fostering collaboration and efficiency. If you're passionate about innovation and eager to contribute to our rapidly evolving market research domain, we invite you to join our team.


Key Responsibilities

  • Lead, mentor, and grow a cross-functional team of engineers.
  • Foster a culture of collaboration, accountability, and continuous learning.
  • Oversee the design and development of robust platform architecture with a focus on scalability, security, and maintainability.
  • Establish and enforce engineering best practices including code reviews, unit testing, and CI/CD pipelines.
  • Promote clean, maintainable, and well-documented code across the team.
  • Lead architectural discussions and technical decision-making, with clear and concise documentation for software components and systems.
  • Collaborate with Product, Design, and other stakeholders to define and prioritize platform features.
  • Track and report on key performance indicators (KPIs) such as velocity, code quality, deployment frequency, and incident response times.
  • Ensure timely delivery of high-quality software aligned with business goals.
  • Work closely with DevOps to ensure platform reliability, scalability, and observability.
  • Conduct regular 1:1s, performance reviews, and career development planning.
  • Conduct code reviews and provide constructive feedback to ensure code quality and maintainability.
  • Participate in the entire software development lifecycle, from requirements gathering to deployment and maintenance.


Added Responsibilities

  • Defining and adhering to the development process.
  • Taking part in regular external audits and maintaining artifacts.
  • Identify opportunities for automation to reduce repetitive tasks.
  • Mentor and coach team members within their teams.
  • Continuously optimize application performance and scalability.
  • Collaborate with the Marketing team to understand different user journeys.


Growth and Development

The following are some of the growth and development activities that you can look forward to at Borderless Access as an Engineering Manager:

  • Develop leadership skills – Enhance your leadership abilities through workshops or coaching from Senior Leadership and Executive Leadership.
  • Foster innovation – Become part of a culture of innovation and experimentation within the product development and operations team.
  • Drive business objectives – Become part of defining and taking actions to meet the business objectives.


About You

  • Bachelor's degree in Computer Science, Engineering, or a related field.
  • 8+ years of experience in software development.
  • Experience with microservices architecture and container orchestration.
  • Excellent problem-solving and analytical skills.
  • Strong communication and collaboration skills.
  • Solid understanding of data structures, algorithms, and software design patterns.
  • Solid understanding of enterprise system architecture patterns.
  • Experience in managing a small to medium-sized team with varied experiences.
  • Strong proficiency in back-end development, including programming languages like Python, Java, or Node.js, and frameworks like Spring or Express.
  • Strong proficiency in front-end development, including HTML, CSS, JavaScript, and popular frameworks like React or Angular.
  • Experience with databases (e.g., MySQL, PostgreSQL, MongoDB).
  • Experience with cloud platforms AWS, Azure, or GCP (preferred is Azure).
  • Knowledge of containerization technologies Docker and Kubernetes.


Industrial Automation Machinery


Agency job
via Michael Page by Pramod P
Bengaluru (Bangalore)
4 - 6 yrs
₹12L - ₹20L / yr
Backup
Enterprise storage
Veeam
Commvault
Linux
+3 more

Position Title - System Engineer Backup & Storage

Job Location - Bommasandra Industrial Area, Hosur Main Road, Bangalore

Working Days - Hybrid: 3 days WFO and 2 days WFH

Years of Experience - 3 to 5 yrs


Must-Have Skills 

  • Minimum 1 year experience with Veeam backup
  • Scripting or Automation tools such as PowerShell or Python
  • 2 years’ experience with enterprise storage solutions
  • 2 years’ experience with Linux-based operating systems
  • Must have experience with backup of VMware

Good-to-Have Skills 

  • Strong expertise in Commvault
  • Scripting or Automation tools such as PowerShell or Python


Your job:

  • Operate and maintain the backup environment using Veeam and Commvault, including backups, restore tests, retention policies, troubleshooting, and continuous improvements
  • Implement and manage backup and recovery measures to ensure data integrity, availability, and security of critical system
  • Operate and maintain global storage solutions including NetApp and IBM platforms, covering availability, performance monitoring, and capacity management
  • Monitor and analyze backup, storage, and system performance to identify optimization and automation opportunities
  • Propose and implement enhancements across backup, storage, virtualization, and hybrid infrastructure technologies to improve efficiency and reliability
  • Maintain accurate documentation and generate reports covering configurations, procedures, performance metrics, and capacity planning.
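The responsibilities above pair backup operations with scripting in PowerShell or Python. A small, hedged sketch of that kind of automation: parsing a job report and flagging anything that needs attention. The CSV layout here is hypothetical, not an actual Veeam or Commvault export format:

```python
import csv
import io

# Hypothetical CSV export of backup job results (illustrative fields).
REPORT = """job,status,duration_min
nightly-vm-backup,Success,42
sql-log-backup,Failed,3
file-share-backup,Warning,55
"""

def failed_jobs(report_csv: str) -> list[str]:
    """Return the names of jobs that did not succeed, for alerting."""
    reader = csv.DictReader(io.StringIO(report_csv))
    return [row["job"] for row in reader if row["status"] != "Success"]

print(failed_jobs(REPORT))
```

In practice the same check might feed a monitoring system or a daily email instead of stdout.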


Your qualifications:

  • Bachelor’s degree in computer science, Information Technology, or a related field
  • Minimum 1 year of experience with Veeam backup
  • 2 years of experience with enterprise storage solutions
  • 2 years’ experience with Linux-based operating systems
  • Strong expertise in Commvault, NetApp, IBM Storage systems (including IBM SVC), Fibre Channel environments on Brocade devices, hybrid cloud architectures, networking concepts, SAN/NAS, deduplication, tiering, vSphere, and scripting or automation tools such as PowerShell or Python
  • Proficiency in Linux administration, backup and storage platforms, virtualization environments, and monitoring tools
  • Very good written and spoken English communication skills.
Teknobuilt Solutions Pvt Ltd
Navi Mumbai
3 - 6 yrs
₹8L - ₹10L / yr
Python
Selenium
Java
Agile/Scrum
QTP
+1 more

Teknobuilt is an innovative construction technology company accelerating a Digital and AI platform that helps all aspects of program management and execution by automating workflows, collaborative manual tasks, and siloed systems. Our platform has received innovation awards and grants in Canada, UK and S. Korea and we are at the frontiers of solving key challenges in the built environment and digital health, safety and quality.

Teknobuilt's vision is helping the world build better- safely, smartly and sustainably. We are on a mission to modernize construction by bringing Digitally Integrated Project Execution System - PACE and expert services for midsize to large construction and infrastructure projects. PACE is an end-to-end digital solution that helps in Real Time Project Execution, Health and Safety, Quality and Field management for greater visibility and cost savings. PACE enables digital workflows, remote working, AI based analytics to bring speed, flow and surety in project delivery. Our platform has received recognition globally for innovation and we are experiencing a period of significant growth for our solutions.

 

Job Responsibilities

As a Quality Analyst Engineer, you will be expected to:

  • Thoroughly analyze project requirements, design specifications, and user stories to understand the scope and objectives.
  • Arrange, set up, and configure the test environments needed for effective test case execution.
  • Participate in and conduct review meetings to discuss test plans, test cases, and defect statuses.
  • Execute manual test cases with precision, analyze results, and identify deviations from expected behavior.
  • Accurately track, log, prioritize, and manage defects through their lifecycle, ensuring clear communication with developers until resolution.
  • Maintain continuous and clear communication with the Test Manager and development team regarding testing progress, roadblocks, and critical findings.
  • Develop, maintain, and manage comprehensive test documentation, including:
    ◦ Detailed test plans
    ◦ Well-structured test cases for various testing processes
    ◦ Concise summary reports on test execution and defect status
    ◦ Thorough test data preparation for test cases
    ◦ "Lessons learned" documents based on testing inputs from previous projects
    ◦ "Suggestion documents" aimed at improving overall software quality
    ◦ Clearly defined test scenarios
  • Clearly report identified bugs to developers with precise steps to reproduce, expected results, and actual results, facilitating efficient defect resolution.
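The bug-reporting duties above map naturally onto a structured defect record. A minimal sketch; the field names are illustrative and not tied to any specific tracker:

```python
from dataclasses import dataclass, field

@dataclass
class DefectReport:
    """Minimal defect record mirroring the reporting fields described above."""
    title: str
    steps_to_reproduce: list = field(default_factory=list)
    expected: str = ""
    actual: str = ""
    severity: str = "Medium"

    def summary(self) -> str:
        # One-line view suitable for a defect triage list.
        return f"[{self.severity}] {self.title} ({len(self.steps_to_reproduce)} steps)"

bug = DefectReport(
    title="Login fails with valid credentials",
    steps_to_reproduce=["Open login page", "Enter valid user", "Click Sign in"],
    expected="User lands on dashboard",
    actual="HTTP 500 error page",
    severity="High",
)
print(bug.summary())
```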

Albert Invent

Nikita Sinha
Posted by Nikita Sinha
Bengaluru (Bangalore)
4 - 8 yrs
Upto ₹30L / yr (Varies)
Automation
Terraform
Python
NodeJS (Node.js)
Amazon Web Services (AWS)

Drive the design, automation, and reliability of Albert Invent’s core platform to support scalable, high-performance AI applications.

You will partner closely with Product Engineering and SRE teams to ensure security, resiliency, and developer productivity while owning end-to-end service operability.


Key Responsibilities

  • Own the design, reliability, and operability of Albert’s mission-critical platform.
  • Work closely with Product Engineering and SRE to build scalable, secure, and high-performance services.
  • Plan and deliver core platform capabilities that improve developer velocity, system resilience, and scalability.
  • Maintain a deep understanding of microservices topology, dependencies, and behavior.
  • Act as the technical authority for performance, reliability, and availability across services.
  • Drive automation and orchestration across infrastructure and operations.
  • Serve as the final escalation point for complex or undocumented production issues.
  • Lead root-cause analysis, mitigation strategies, and long-term system improvements.
  • Mentor engineers in building robust, automated, and production-grade systems.
  • Champion best practices in SRE, reliability, and platform engineering.

Must-Have Requirements

  • Bachelor’s degree in Computer Science, Engineering, or equivalent practical experience.
  • 4+ years of strong backend coding in Python or Node.js.
  • 4+ years of overall software engineering experience, including 2+ years in an SRE / automation-focused role.
  • Strong hands-on experience with Infrastructure as Code (Terraform preferred).
  • Deep experience with AWS cloud infrastructure and distributed systems (microservices, APIs, service-to-service communication).
  • Experience with observability systems – logs, metrics, and tracing.
  • Experience using CI/CD pipelines (e.g., CircleCI).
  • Performance testing experience using K6 or similar tools.
  • Strong focus on automation, standards, and operational excellence.
  • Experience building low-latency APIs (< 200ms response time).
  • Ability to work in fast-paced, high-ownership environments.
  • Proven ability to lead technically, mentor engineers, and influence engineering quality.
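The low-latency requirement above (< 200ms) is exactly the kind of figure a K6 run reports as a p95/p99 percentile. As a rough illustration of the underlying arithmetic, here is a nearest-rank percentile gate in plain Python; the sample latencies are invented:

```python
def percentile(samples, p):
    """Nearest-rank percentile; enough for a quick latency gate."""
    ranked = sorted(samples)
    k = max(0, min(len(ranked) - 1, round(p / 100 * len(ranked)) - 1))
    return ranked[k]

# Hypothetical response times in milliseconds from a load-test run.
latencies_ms = [110, 95, 130, 180, 90, 150, 125, 140, 105, 160]
p95 = percentile(latencies_ms, 95)
print(p95, p95 < 200)  # gate: p95 must stay under the 200 ms budget
```

A real pipeline would fail the CI stage when the gate is breached rather than just printing.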

Good-to-Have Skills

  • Kubernetes and container orchestration experience.
  • Observability tools such as Prometheus, Grafana, OpenTelemetry, Datadog.
  • Experience building Internal Developer Platforms (IDPs) or reusable engineering frameworks.
  • Exposure to ML infrastructure or data engineering pipelines.
  • Experience working in compliance-driven environments (SOC2, HIPAA, etc.).


Albert Invent

Shivangi Ahuja
Posted by Shivangi Ahuja
Bengaluru (Bangalore)
4 - 13 yrs
Best in industry
Search Engine Optimization (SEO)
Elasticsearch
Python
NodeJS (Node.js)

To lead the design, development, and optimization of high-scale search and discovery systems, leveraging deep expertise in OpenSearch. The Search Staff Engineer will enhance search relevance, query performance, and indexing efficiency by utilizing OpenSearch’s full-text, vector search, and analytics capabilities. This role focuses on building real-time search pipelines, implementing advanced ranking models, and architecting distributed indexing solutions to deliver a high-performance, scalable, and intelligent search experience.


Responsibilities:

• Architect, develop, and maintain a scalable OpenSearch-based search infrastructure for high-traffic applications.

• Optimize indexing strategies, sharding, replication, and query execution to improve search performance and reliability.

• Implement cross-cluster search, multi-tenant search solutions, and real-time search capabilities.

• Ensure efficient log storage, retention policies, and lifecycle management in OpenSearch.

• Monitor and troubleshoot performance bottlenecks, ensuring high availability and resilience.

• Design and implement real-time and batch indexing pipelines for structured and unstructured data.

• Optimize schema design, field mappings, and tokenization strategies for improved search performance.

• Manage custom analyzers, synonyms, stopwords, and stemming filters for multilingual search.

• Ensure search infrastructure adheres to security best practices, including encryption, access control, and audit logging.

• Optimize search for low latency, high throughput, and cost efficiency.

• Collaborate cross-functionally with engineering, product, and operations teams to ensure seamless platform delivery.

• Define and communicate a strategic roadmap for Search initiatives aligned with business goals.

• Work closely with stakeholders to understand database requirements and provide technical solutions.


Requirements:

• 4+ years of experience in search engineering, with at least 3+ years of deep experience in OpenSearch.

• Strong expertise in search indexing, relevance tuning, ranking algorithms, and query parsing.

• Hands-on experience with OpenSearch configurations, APIs, shards, replicas, and cluster scaling.

• Strong programming skills in Node.js and Python and experience with OpenSearch SDKs.

• Proficiency in REST APIs, OpenSearch DSL queries, and aggregation frameworks.

• Knowledge of observability, logging, and monitoring tools (Prometheus, OpenTelemetry, Grafana).

• Experience managing OpenSearch clusters on AWS OpenSearch, containers, or self-hosted environments.

• Strong understanding of security best practices, role-based access control (RBAC), encryption, and IAM.

• Familiarity with multi-region, distributed search architectures.

• Strong analytical and debugging skills, with a proactive approach to identifying and mitigating risks.

• Exceptional communication skills, with the ability to influence and drive consensus among stakeholders.
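Since the OpenSearch query DSL is just JSON, the query-construction side of this role can be sketched without a live cluster. Below, a hedged example assembling a bool query with a terms aggregation; the index fields ("title", "status", "category") are hypothetical, and sending the body would normally go through an OpenSearch client against the _search endpoint:

```python
import json

def build_search_query(term: str, status: str) -> dict:
    """Assemble an OpenSearch-style bool query with a terms aggregation.

    Only the JSON body is built here; relevance scoring comes from the
    `match` clause, while the `filter` clause narrows results without
    affecting the score.
    """
    return {
        "query": {
            "bool": {
                "must": [{"match": {"title": term}}],
                "filter": [{"term": {"status": status}}],
            }
        },
        "aggs": {"by_category": {"terms": {"field": "category", "size": 10}}},
    }

body = build_search_query("laptop", "active")
print(json.dumps(body, indent=2))
```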

Delhi
5 - 7 yrs
₹15L - ₹17L / yr
Django
React.js
HTML/CSS
Amazon Web Services (AWS)
Google Cloud Platform (GCP)
+7 more

Hey there budding tech wizard! Are you ready to take on a new challenge?

As a Senior Software Developer 1 (Full Stack) at Techlyticaly, you'll solve problems and flex your tech muscles to build amazing stuff while mentoring and guiding others. You'll work under the guidance of mentors and be responsible for developing high-quality, maintainable code modules that are extensible and meet the technical guidelines provided.


Responsibilities

We want you to show off your technical skills, but we also want you to be creative and think outside the box. Here are some of the ways you'll be flexing your tech muscles:

  • Use your superpowers to solve complex technical problems, combining your excellent abstract reasoning ability with problem-solving skills.
  • Become efficient in at least one product or technology of strategic importance to the organisation, like a true tech ninja.
  • Stay up-to-date with emerging trends in the field, so that you can keep bringing fresh ideas to the table.
  • Implement robust and extensible code modules as per guidelines. We love all code that's functional (Don’t we?)
  • Develop good quality, maintainable code modules without any defects, exhibiting attention to detail. Nothing should look sus!
  • Manage assigned tasks well and schedule them appropriately for self and team, while providing visibility to the mentor and understanding the mentor's expectations of work. But don't be afraid to add your own twist to the work you're doing.
  • Consistently apply and improve team software development processes such as estimations, tracking, testing, code and design reviews, etc., but do it with a funky twist that reflects your personality.
  • Clarify requirements and provide end-to-end estimates. We all love it when requirements are clear (Don’t we?)
  • Participate in release planning and design complex modules & features.
  • Work with product and business teams directly for critical issue ownership. Isn’t it better when one of us understands what they say?
  • Feel empowered by managing deployments and assisting in infra management.
  • Act as role model for the team and guide them to brilliance. We all feel secured when we have someone to look up to.


Qualifications

We want to make sure you're a funky, tech-loving person with a passion for learning and growing. Here are some of the things we're looking for:

  • You have a Bachelor's or Master’s degree in Computer Science or a related field, but you also have a creative side that you're not afraid to show.
  • You have excellent abstract reasoning ability and a strong understanding of core computer science fundamentals.
  • You're proficient with web programming languages such as HTML, CSS, JavaScript with at least 5+ years of experience, but you're also open to learning new languages and technologies that might not be as mainstream.
  • You’ve 5+ years of experience with backend web framework Django and DRF.
  • You’ve 5+ years of experience with frontend web framework React.
  • Your knowledge of cloud service providers like AWS, GCP, Azure, etc. will be an added bonus.
  • You have experience with testing, code, and design reviews.
  • You have strong written and verbal communication skills, but you're also not afraid to show your personality and let your funky side shine through.
  • You can work independently and in a team environment, but you're also excited to collaborate with others and share your ideas.
  • You've demonstrated your ability to lead a small team of developers.
  • And most important, you're also excited to learn about new things and try out new ideas.


Compensation:

We know you're passionate and talented, and we want to reward you for that. That's why we're offering a compensation package of 15 - 17 LPA!

This is a mid-level position where you'll get to flex your coding muscles, work on exciting projects, and grow your skills in a fast-paced, dynamic environment. So, if you're passionate about all things tech and ready to take your skills to the next level, we want YOU to apply! Let's make some magic happen together!

We are located in Delhi. This post may require relocation.

Kanerika Software

Ariba Khan
Posted by Ariba Khan
Hyderabad
6 - 10 yrs
Upto ₹25L / yr (Varies)
Artificial Intelligence (AI)
Data Analytics
SQL
Python
Presales

About the role:

As a Presales Solution Architect, you will collaborate with our sales team to provide technical expertise and support throughout the sales process. You will play a crucial role in understanding customer requirements, presenting our solutions, and demonstrating the value of our products.


Responsibilities:


As a Solution Architect, you are expected to have all the skills below:

  • Architecture & Design: Develop high-level architecture designs for scalable, secure, and robust solutions
  • Technology Evaluation: Select appropriate technologies, frameworks, and platforms for business needs.
  • Cloud & Infrastructure: Design cloud-native, hybrid, or on-premises solutions using AWS, Azure, or GCP.
  • Integration: Ensure seamless integration between various enterprise applications, APIs, and third-party services.

Customer Centricity – Soft Skills

  • Communication - Strong written and oral communications skills
  • Interpersonal - Strong relationship building for getting work done efficiently
  • Intelligent Probing - Asking smart questions, in-depth understanding of the opp. Context
  • Collaboration - Effective use of tools - Presales templates, MS Teams, OneDrive, Calendars, etc. 
  • Presentation Skills - Strong presentation skills and experience presenting to C-Level executives 
  • Negotiation - Resource allocation, response timelines, estimates, resource loading, rates, etc.

 Agility – Responsiveness

  • Kickoff & bid planning - High reaction speed and quality; identification of the right team.
  • Adherence to the bid plan - High reaction speed and quality; identification of the right team
  • Follow-ups & reminders - Timely and appropriate reminders
  • Proactiveness - Quick risk identification & highlighting; early identification of key deliverables not directly asked for (e.g. mock-ups); early win theme identification with the help of key stakeholders.

Responsibility – Deliverable Quality & Process compliance

  • Language & articulation - Consistent, high-quality language across sections of a proposal; correctness of grammar, punctuation and appropriate capitalization 
  • Document structure - Standard template for general proposals; customized structure for specific requests like RFPs; story boarding for presentations 
  • Completeness & correctness - General completeness and according to requirements specified in solicitation documents; correctness of content - functional, technical, company data, process-related content, creative content like mock-ups, etc. 
  • Process compliance - Regular and timely updates, updating knowledge repository 
  • Aesthetics & document formatting - Quality, consistency/relevance of graphics - architecture diagrams, mock-ups, infographics, etc.; consistency of formatting (headings, fonts, indents, bullets, table and diagram captions, text alignment, etc.); adherence to Kanerika standards or client specified standards

 Intelligence – Value Addition

  • Providing intelligence to the bid team - Client’s industry, client (general info and their preferences for engagement model, development methodology), competition, "build vs buy" research (if applicable), tentative budget, etc. 
  • Empathy - Understanding customer’s deeper pain area/s (generally not specified in the solicitation documents) with research and intelligent queries 
  • Baking win themes - (at times subtly) in all sections of the deliverable/s and strong messaging in the executive summary.
  • Solution & estimation validation - Ability to guide and challenge the solution & the estimates
  • Quality of original exec summary - A strong first pass, with zero objective (factual) errors.


Global - leveraging Organizational Strength

  • Relevance & quality of case studies - Including all relevant success stories, 
  • Leveraging the best contributors - Specific context of each opportunity 
  • Effective use of past knowledge - From the past presales deliverables 
  • Effective use of support groups - Talent management, Information Security, Legal, HR, etc.


Rapid Canvas

Nikita Sinha
Posted by Nikita Sinha
Remote only
6 - 10 yrs
Upto ₹60L / yr (Varies)
Java
Python
Go Programming (Golang)
NodeJS (Node.js)
PHP

We are looking for back-end engineering experts with the passion to take on new challenges in a high-growth startup environment. If you love finding creative solutions to coding challenges using the latest tech stack, such as Java 18+ and Spring Boot 3+, then we would like to speak with you.

Roles & Responsibilities

  • You will be part of a team that focuses on building a world-class data science platform
  • Work closely with both product owners and architects to fully understand business requirements and the design philosophy
  • Optimize web and data applications for performance and scalability
  • Collaborate with automation engineering team to deliver high-quality deliverables within a challenging time frame
  • Produce quality code, raising the bar for team performance and speed
  • Recommend systems solutions by comparing advantages and disadvantages of custom development and purchased alternatives
  • Follow emerging technologies

Key Skills Required

  • Bachelor’s degree (or equivalent) in computer science
  • At least 6 years of experience in software development using Java / Python, SpringBoot, REST API and scalable microservice frameworks.
  • Strong foundation in computer science, algorithms, and web design
  • Experience in writing highly secure web applications
  • Knowledge of container/orchestration tools (Kubernetes, Docker, etc.) and UI frameworks (NodeJS, React)
  • Good development habits, including unit testing, CI, and automated testing
  • High growth mindset that challenges the status quo and focuses on unique ideas and solutions
  • Experience on working with dynamic startups / high intensity environment would be a Plus
  • Experience working with shell scripting, GitHub Actions, Unix, and prominent cloud services like GCP, Azure, and S3 is a plus
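The "good development habits, including unit testing" called out above can be sketched with the standard library's unittest module; the function under test is a toy example, not part of any real codebase:

```python
import unittest

def slugify(title: str) -> str:
    """Toy function under test: normalize a title into a URL slug."""
    return "-".join(title.lower().split())

class SlugifyTest(unittest.TestCase):
    def test_basic(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_collapses_whitespace(self):
        self.assertEqual(slugify("  A   B "), "a-b")

# Run the suite explicitly so the example also works outside a CI runner.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(SlugifyTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())
```

In CI the same tests would typically be collected automatically (e.g. `python -m unittest` or pytest) rather than run by hand.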

Why Join Us

  • Drive measurable impact for Fortune 500 customers across the globe, helping them turn AI vision into operational value.
  • Be part of a category-defining AI company, pioneering a hybrid model that bridges agents and experts.
  • Own strategic accounts end-to-end and shape what modern AI success looks like.
  • Work with a cross-functional, high-performance team that values execution, clarity, and outcomes.
  • Globally competitive compensation and benefits tailored to your local market.
  • Recognized as a Top 5 Data Science and Machine Learning platform on G2 for customer satisfaction.

Ride-hailing Industry


Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore)
5 - 7 yrs
₹42L - ₹45L / yr
DevOps
Python
Shell Scripting
Infrastructure
Terraform
+16 more

JOB DETAILS:

- Job Title: Senior Devops Engineer 2

- Industry: Ride-hailing

- Experience: 5-7 years

- Working Days: 5 days/week

- Work Mode: ONSITE

- Job Location: Bangalore

- CTC Range: Best in Industry


Required Skills: Cloud & Infrastructure Operations; Kubernetes & Container Orchestration; Monitoring, Reliability & Observability; Proficiency with Terraform, Ansible, etc.; Strong problem-solving skills with scripting (Python/Go/Shell)

 

Criteria:

1.   Candidate must be from a product-based company or a scalable app-based start-up, with experience handling large-scale production traffic.

2.   Minimum 5 yrs of experience working as a DevOps/Infrastructure Consultant

3.   Own end-to-end infrastructure right from non-prod to prod environment including self-managed DBs

4.   Candidate must have experience in database migration from scratch 

5.   Must have a firm hold on the container orchestration tool Kubernetes

6.   Must have expertise in configuration management tools like Ansible, Terraform, Chef / Puppet

7.   Understanding of programming languages like Go, Python, and Java

8.   Working on databases like Mongo/Redis/Cassandra/Elasticsearch/Kafka.

9.   Working experience on Cloud platform - AWS

10. Candidate should have Minimum 1.5 years stability per organization, and a clear reason for relocation.

 

Description 

Job Summary:

As a DevOps Engineer at company, you will be working on building and operating infrastructure at scale, designing and implementing a variety of tools to enable product teams to build and deploy their services independently, improving observability across the board, and designing for security, resiliency, availability, and stability. If the prospect of ensuring system reliability at scale and exploring cutting-edge technology to solve problems excites you, then this is your fit.

 

Job Responsibilities:

● Own end-to-end infrastructure right from non-prod to prod environment including self-managed DBs

● Codify our infrastructure

● Do what it takes to keep the uptime above 99.99%

● Understand the bigger picture and sail through the ambiguities

● Scale technology considering cost and observability and manage end-to-end processes

● Understand DevOps philosophy and evangelize the principles across the organization

● Communicate and collaborate across teams to break down silos
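
To make the 99.99% uptime goal above concrete, it translates into an annual error budget of under an hour of downtime. A quick back-of-the-envelope sketch (the function name and 365-day year are our own illustration, not from the posting):

```python
def downtime_budget_minutes(availability: float, days: float = 365.0) -> float:
    """Minutes of allowed downtime per period for a given availability target."""
    total_minutes = days * 24 * 60
    return total_minutes * (1.0 - availability)

# "Four nines" (99.99%) leaves roughly 52.6 minutes of downtime per year
print(round(downtime_budget_minutes(0.9999), 2))  # → 52.56
```

Three nines (99.9%) would allow about ten times that, which is why the jump to four nines dominates on-call, automation, and failover design.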

 

Job Requirements:

● B.Tech. / B.E. degree in Computer Science or equivalent software engineering degree/experience

● Minimum 5 yrs of experience working as a DevOps/Infrastructure Consultant

● Must have a firm hold on the container orchestration tool Kubernetes

● Must have expertise in configuration management tools like Ansible, Terraform, Chef / Puppet

● Strong problem-solving skills, and ability to write scripts using any scripting language

● Working understanding of programming languages such as Go, Python, and Java

● Comfortable working on databases like Mongo/Redis/Cassandra/Elasticsearch/Kafka.

 

What’s there for you?

The company’s team handles everything: infra, tooling, and a fleet of self-managed databases. Highlights include:

● 150+ microservices with event-driven architecture across Golang, Java, and Node.js stacks

● More than 100,000 requests per second on our edge gateways

● ~20,000 events per second on self-managed Kafka

● 100s of TB of data on self-managed databases

● 100s of real-time continuous deployments to production

● Self-managed infra supporting

● 100% OSS

Ride-hailing Industry


Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore)
6 - 9 yrs
₹47L - ₹50L / yr
DevOps
Python
Shell Scripting
Kubernetes
Terraform
+15 more

JOB DETAILS:

- Job Title: Lead DevOps Engineer

- Industry: Ride-hailing

- Experience: 6-9 years

- Working Days: 5 days/week

- Work Mode: ONSITE

- Job Location: Bangalore

- CTC Range: Best in Industry


Required Skills: Cloud & Infrastructure Operations, Kubernetes & Container Orchestration, Monitoring, Reliability & Observability, Proficiency with Terraform, Ansible etc., Strong problem-solving skills with scripting (Python/Go/Shell)

 

Criteria:

1.   Candidate must come from a product-based or scalable app-based startup, with experience handling large-scale production traffic.

2.   Minimum 6 yrs of experience working as a DevOps/Infrastructure Consultant

3.   Candidate must have 2 years of experience as a lead (managing a team of at least 3 to 4 members)

4.   Own end-to-end infrastructure right from non-prod to prod environments, including self-managed DBs

5.   Candidate must have hands-on experience with database migrations from scratch

6.   Must have a firm hold on the container orchestration tool Kubernetes

7.   Should have expertise in configuration management tools like Ansible, Terraform, Chef / Puppet

8.   Working understanding of programming languages such as Go, Python, and Java

9.   Working experience with databases like Mongo/Redis/Cassandra/Elasticsearch/Kafka.

10.   Working experience on Cloud platform -AWS

11. Candidate should have a minimum of 1.5 years of tenure per organization, and a clear reason for relocation.

 

Description

Job Summary:

As a DevOps Engineer at the company, you will work on building and operating infrastructure at scale, designing and implementing a variety of tools that enable product teams to build and deploy their services independently, improving observability across the board, and designing for security, resiliency, availability, and stability. If the prospect of ensuring system reliability at scale and exploring cutting-edge technology to solve problems excites you, this role is the right fit.

 

Job Responsibilities:

● Own end-to-end infrastructure right from non-prod to prod environment including self-managed DBs

● Codify our infrastructure

● Do what it takes to keep the uptime above 99.99%

● Understand the bigger picture and sail through the ambiguities

● Scale technology considering cost and observability and manage end-to-end processes

● Understand DevOps philosophy and evangelize the principles across the organization

● Communicate and collaborate across teams to break down silos

 

Job Requirements:

● B.Tech. / B.E. degree in Computer Science or equivalent software engineering degree/experience

● Minimum 6 yrs of experience working as a DevOps/Infrastructure Consultant

● Must have a firm hold on the container orchestration tool Kubernetes

● Must have expertise in configuration management tools like Ansible, Terraform, Chef / Puppet

● Strong problem-solving skills, and ability to write scripts using any scripting language

● Working understanding of programming languages such as Go, Python, and Java

● Comfortable working on databases like Mongo/Redis/Cassandra/Elasticsearch/Kafka.

 

What’s there for you?

The company’s team handles everything: infra, tooling, and a fleet of self-managed databases. Highlights include:

● 150+ microservices with event-driven architecture across Golang, Java, and Node.js stacks

● More than 100,000 requests per second on our edge gateways

● ~20,000 events per second on self-managed Kafka

● 100s of TB of data on self-managed databases

● 100s of real-time continuous deployments to production

● Self-managed infra supporting

● 100% OSS

Ride-hailing Industry


Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore)
4 - 6 yrs
₹34L - ₹37L / yr
DevOps
Python
Shell Scripting
Kubernetes
Monitoring
+18 more

JOB DETAILS:

- Job Title: Senior DevOps Engineer 1

- Industry: Ride-hailing

- Experience: 4-6 years

- Working Days: 5 days/week

- Work Mode: ONSITE

- Job Location: Bangalore

- CTC Range: Best in Industry


Required Skills: Cloud & Infrastructure Operations, Kubernetes & Container Orchestration, Monitoring, Reliability & Observability, Proficiency with Terraform, Ansible etc., Strong problem-solving skills with scripting (Python/Go/Shell)

 

Criteria:

1. Candidate must come from a product-based or scalable app-based startup, with experience handling large-scale production traffic.

2. Candidate must have strong Linux expertise with hands-on production troubleshooting and working knowledge of databases and middleware (Mongo, Redis, Cassandra, Elasticsearch, Kafka).

3. Candidate must have solid experience with Kubernetes.

4. Candidate should have strong knowledge of configuration management tools like Ansible, Terraform, and Chef / Puppet. Familiarity with Prometheus, Grafana, etc. is an add-on.

5. Candidate must be an individual contributor with strong ownership.

6. Candidate must have hands-on experience with database migrations and observability tools such as Prometheus and Grafana.

7. Candidate must have working knowledge of Go/Python and Java.

8. Candidate should have working experience on Cloud platform - AWS

9. Candidate should have a minimum of 1.5 years of tenure per organization, and a clear reason for relocation.

 

Description 

Job Summary:

As a DevOps Engineer at the company, you will work on building and operating infrastructure at scale, designing and implementing a variety of tools that enable product teams to build and deploy their services independently, improving observability across the board, and designing for security, resiliency, availability, and stability. If the prospect of ensuring system reliability at scale and exploring cutting-edge technology to solve problems excites you, this role is the right fit.

 

Job Responsibilities:

- Own end-to-end infrastructure right from non-prod to prod environment including self-managed DBs.

- Understanding the needs of stakeholders and conveying this to developers.

- Working on ways to automate and improve development and release processes.

- Identifying technical problems and developing software updates and ‘fixes’.

- Working with software developers to ensure that development follows established processes and works as intended.

- Do what it takes to keep the uptime above 99.99%.

- Understand DevOps philosophy and evangelize the principles across the organization.

- Communicate and collaborate across teams to break down silos

 

Job Requirements:

- B.Tech. / B.E. degree in Computer Science or equivalent software engineering degree/experience.

- Minimum 4 yrs of experience working as a DevOps/Infrastructure Consultant.

- Strong background in operating systems like Linux.

- Understands the container orchestration tool Kubernetes.

- Proficient knowledge of configuration management tools like Ansible, Terraform, and Chef / Puppet. Familiarity with Prometheus, Grafana, etc. is an add-on.

- Problem-solving attitude, and ability to write scripts using any scripting language.

- Working understanding of programming languages such as Go, Python, and Java.

- Basic understanding of databases and middleware like Mongo/Redis/Cassandra/Elasticsearch/Kafka.

- Should be able to take ownership of tasks and be responsible.

- Good communication skills

 

Ride-hailing Industry


Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore)
4 - 7 yrs
₹18L - ₹21L / yr
Data Analytics
Python
SQL
Data Visualization
Stakeholder management
+7 more

JOB DETAILS:

- Job Title: Senior Business Analyst

- Industry: Ride-hailing

- Experience: 4-7 years

- Working Days: 5 days/week

- Work Mode: ONSITE

- Job Location: Bangalore

- CTC Range: Best in Industry


Required Skills: Data Visualization, Data Analysis, Strong in Python and SQL, Cross-Functional Communication & Stakeholder Management


Criteria:

1. Candidate must have 4–7 years of experience in analytics / business analytics roles.

2. Candidate must be currently based in Bangalore only (no relocation allowed).

3. Candidate must have hands-on experience with Python and SQL.

4. Candidate must have experience working with databases/APIs (Mongo, Presto, REST or similar).

5. Candidate must have experience building dashboards/visualizations (Tableau, Metabase or similar).

6. Candidate must be available for face-to-face interviews in Bangalore.

7. Candidate must have experience working closely with business, product, and operations teams.


Description

Job Responsibilities:

● Acquiring data from primary/secondary data sources like Mongo, Presto, and REST APIs.

● Candidate must have strong hands-on experience in Python and SQL.

● Build visualizations to communicate data to key decision-makers; familiarity with building interactive dashboards in Tableau/Metabase is preferred

● Establish relationships between output metrics and their drivers, identify the critical drivers, and control them to achieve the desired value of the output metric

● Partner with operations/business teams to consult, develop and implement KPIs, automated reporting/process solutions, and process improvements to meet business needs

● Collaborating with business owners and product teams to perform data analysis of experiments and recommend the next best action for the business; this involves being embedded in business decision teams to drive faster decision-making

● Collaborating with several functional teams across the organization, using raw data and metrics to back up assumptions, develop hypotheses/business cases, and complete root cause analyses, thereby delivering output to business users
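
For the driver analysis described above, a common first-pass check is how strongly a candidate driver moves with the output metric. A minimal stdlib-only sketch with toy numbers (the metric names and values are invented for illustration, not from the role):

```python
import statistics

def pearson_r(xs, ys):
    """Pearson correlation between a candidate driver and an output metric."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Toy example: driver-minutes online (supply) vs. completed rides (output metric)
supply = [100, 120, 90, 150, 130]
rides = [80, 95, 70, 120, 100]
print(round(pearson_r(supply, rides), 3))
```

In practice this would run over warehouse data via SQL/pandas, and correlation is only a screening step before causal checks and controlled experiments.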

 

Job Requirements:

● Undergraduate and/or graduate degree in Math, Economics, Statistics, Engineering, Computer Science, or other quantitative field.

● Around 4-6 years of experience embedded in analytics and adjacent business teams, working as an analyst aiding decision-making

● Proficiency in Excel and ability to structure and present data in creative ways to drive insights

● Some basic understanding of (or experience in) evaluating financial parameters like return-on-investment (ROI), cost allocation, optimization, etc. is good to have

● Strong hands-on experience in Python and SQL.

What’s there for you?

● Opportunity to understand the overall business & collaborate across all functional departments

● Prospect to disrupt the existing mobility industry business models (ideate, pilot, monitor & scale)

● Deal with the ambiguity of decision making while balancing long-term/strategic business needs and short-term/tactical moves

● Full business ownership working style which translates to freedom to pick problem statements/workflow and self-driven culture

GrowthArc


Reshika Mendiratta
Posted by Reshika Mendiratta
Remote, Bengaluru (Bangalore)
5yrs+
Best in industry
Amazon Web Services (AWS)
Migration
Java
Python
Go Programming (Golang)
+4 more

About the role

We are looking for an experienced AWS Cloud Engineer with strong Java and Python/Golang expertise to design, modernize, and migrate applications and infrastructure to AWS. The ideal candidate will have hands-on experience with cloud-native development, Java application modernization, and end-to-end AWS migrations, with a strong focus on scalability, security, performance, and cost optimization.

This role involves working across application migration, cloud-native development, and infrastructure automation, collaborating closely with DevOps, security, and product teams.


Key Responsibilities

  • Lead and execute application and infrastructure migrations from on-premises or other cloud platforms to AWS
  • Assess legacy Java-based applications and define migration strategies (rehost, re-platform, refactor)
  • Design and develop cloud-native applications and services using Java, Python, or Golang
  • Modify and optimize applications for AWS readiness and scalability
  • Design and implement AWS-native architectures ensuring high availability, security, and cost efficiency
  • Build and maintain serverless and containerized solutions on AWS
  • Develop RESTful APIs and microservices for system integrations
  • Implement Infrastructure as Code (IaC) using CloudFormation, Terraform, or AWS CDK
  • Support and improve CI/CD pipelines for deployment and migration activities
  • Plan and execute data migration, backup, and disaster recovery strategies
  • Monitor, troubleshoot, and resolve production and migration-related issues with minimal downtime
  • Ensure adherence to AWS security best practices, governance, and compliance standards
  • Create and maintain architecture diagrams, runbooks, and migration documentation
  • Perform post-migration validation, performance tuning, and optimization

Required Skills & Experience

  • 5–10 years of overall IT experience with strong AWS exposure
  • Hands-on experience with AWS services, including:
  • EC2, Lambda, S3, RDS
  • ECS / EKS, API Gateway
  • VPC, Subnets, Route Tables, Security Groups
  • IAM, Load Balancers (ALB/NLB), Auto Scaling
  • CloudWatch, SNS, CloudTrail
  • Strong development experience in Java (8+), Python, or Golang
  • Experience migrating Java applications (Spring / Spring Boot preferred)
  • Strong understanding of cloud-native, serverless, and microservices architectures
  • Experience with CI/CD tools (Jenkins, GitHub Actions, GitLab CI, etc.)
  • Hands-on experience with Linux/UNIX environments
  • Proficiency with Git-based version control
  • Strong troubleshooting, analytical, and problem-solving skills

Good to Have / Nice to Have

  • Experience with Docker and Kubernetes (EKS)
  • Knowledge of application modernization patterns
  • Experience with Terraform, CloudFormation, or AWS CDK
  • Database experience: MySQL, PostgreSQL, Oracle, DynamoDB
  • Understanding of the AWS Well-Architected Framework
  • Experience in large-scale or enterprise migration programs
  • AWS Certifications (Developer Associate, Solutions Architect, or Professional)

Education

  • Bachelor’s degree in Computer Science, Engineering, or a related field
Techgenzi Private Limited
Coimbatore
2 - 5 yrs
₹6L - ₹12L / yr
Python
Machine Learning (ML)
RESTful APIs
Artificial Intelligence (AI)
Large Language Models (LLM) tuning
+5 more

Role: Senior AI Engineer

Work Location: TechGenzi Coimbatore Office (ODC for Tiramai.ai)

Employment Type: Full-time

Experience: 2–5 years (Full-stack development with AI exposure)


About the Role & Work Location.

The selected candidate will be employed by Tiramai.ai and will work exclusively on Tiramai.ai projects. The role will be based out of TechGenzi’s Coimbatore office, which functions as an Offshore Development Center (ODC) supporting Tiramai.ai’s product and engineering initiatives.

Primary Focus

As an AI Engineer at our enterprise SaaS and AI-native organization, you will play a pivotal role in building secure, scalable, and intelligent digital solutions. This role combines full-stack development expertise with applied AI skills to create next-generation platforms that empower enterprises to modernize and act smarter with AI. You will work on AI-driven features, APIs, and cloud-native applications that are production-ready, compliance-conscious, and aligned with our mission of delivering responsible AI innovation.



Key Responsibilities

  • Design, develop, and maintain full-stack applications using Python (backend) and React/Angular (frontend).
  • Build and integrate AI-driven modules, leveraging GenAI, ML models, and AI-native tools into enterprise-grade SaaS products.
  • Develop scalable REST APIs and microservices with security, compliance, and performance in mind.
  • Collaborate with architects, product managers, and cross-functional teams to translate requirements into production-ready features.
  • Ensure adherence to secure coding standards, data privacy regulations, and human-in-the-loop AI principles.
  • Participate in code reviews, system design discussions, and continuous integration/continuous deployment (CI/CD) practices.
  • Contribute to reusable libraries, frameworks, and best practices to accelerate AI platform development.


Skills Required

  • Strong proficiency in Python for backend development.
  • Frontend expertise in React.js or Angular with 2+ years of experience.
  • Hands-on experience in full SDLC development (design, build, test, deploy, maintain).
  • Familiarity with AI/ML frameworks (e.g., TensorFlow, PyTorch) or GenAI tools (LangChain, vector DBs, OpenAI APIs).
  • Knowledge of cloud-native development (AWS/Azure/GCP), Docker, Kubernetes, and CI/CD pipelines.
  • Strong understanding of REST APIs, microservices, and enterprise-grade security standards.
  • Ability to work collaboratively in fast-paced, cross-functional teams with strong problem-solving and analytical skills.
  • Exposure to responsible AI principles (explainability, bias mitigation, compliance) is a plus.


Growth Path

  • AI Engineer (2–4 years): focus on full-stack + AI integration, delivering production-ready features.
  • Senior AI Engineer (4–6 years): lead modules, mentor juniors, and drive AI feature development at scale.
  • Lead AI Engineer (6–8 years): own solution architecture for AI features, ensure security/compliance, and collaborate closely with product/tech leaders.
  • AI Architect / Engineering Manager (8+ years): shape AI platform strategy, guide large-scale deployments, and influence the product/technology roadmap.
Indore, Chennai
3 - 7 yrs
₹5L - ₹15L / yr
Data Engineering
Python
Apache
Databricks

Required Skills & Qualifications

Technical Skills

  • Strong hands-on experience with Databricks and Apache Spark.
  • Proficiency in Python and SQL.
  • Proven experience in data mapping, transformation, and data modeling.
  • Experience integrating data from APIs, databases, and cloud storage.
  • Solid understanding of ETL/ELT concepts and data warehousing principles.


Key Responsibilities

Data Source Identification & Quality Assessment

Data Mapping & Integration

  • Define and maintain comprehensive data mapping between source systems and Databricks tables.
  • Design and implement scalable ETL/ELT pipelines using Databricks and Apache Spark.

Databricks & Data Modeling

  • Develop and optimize Databricks workloads using Spark and Delta Lake.
  • Design efficient data models optimized for performance, analytics, and API consumption.
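
As a rough illustration of the source-to-target mapping described above, the mapping is often kept declarative and applied in the transform step of the pipeline. This sketch uses plain Python with hypothetical field names; in practice it would run as a Spark/Databricks job over DataFrames:

```python
# Declarative source→target field mapping, with a per-field transform function
MAPPING = {
    "cust_id": ("customer_id", str),
    "amt": ("amount_inr", float),
    "ts": ("event_date", lambda v: v[:10]),  # keep the date part of an ISO timestamp
}

def transform(row: dict) -> dict:
    """Apply the mapping to one source record, producing a target-schema record."""
    return {target: fn(row[source]) for source, (target, fn) in MAPPING.items()}

raw = {"cust_id": 42, "amt": "199.5", "ts": "2024-05-01T10:30:00Z"}
print(transform(raw))  # → {'customer_id': '42', 'amount_inr': 199.5, 'event_date': '2024-05-01'}
```

Keeping the mapping as data rather than code makes it easy to version, review, and reuse across batch and incremental loads.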


Rapid Canvas


Nikita Sinha
Posted by Nikita Sinha
Remote only
4 - 8 yrs
Up to ₹60L / yr (varies)
Python
Large Language Models (LLM) tuning
Pipeline management
Systems design
Artificial Intelligence (AI)

We are seeking a highly motivated and skilled AI Engineer with strong fundamentals in applied machine learning and a passion for building and deploying production-grade AI solutions for enterprise clients. As a key technical expert and the face of our company, you will directly interface with customers to design, build, and deliver cutting-edge AI applications. This is a customer-facing role that requires a balance of deep technical expertise and excellent communication skills.

Roles & Responsibilities

Design & Deliver AI Solutions

  • Interact directly with customers.
  • Understand their business requirements.
  • Translate them into robust, production-ready AI solutions.
  • Manage AI projects with the customer's vision in mind.
  • Build long-term, trusted relationships with clients.

Build & Integrate Agents

  • Architect, build, and integrate intelligent agent systems.
  • Automate IT functions and solve specific client problems.
  • Use expertise in frameworks like LangChain or LangGraph to build agents that execute multi-step tasks.
  • Integrate these custom agents directly into the RapidCanvas platform.

Implement LLM & RAG Pipelines

  • Develop grounding pipelines with retrieval-augmented generation (RAG).
  • Contextualize LLM behavior with client-specific knowledge.
  • Build and integrate agents with infrastructure signals like logs and APIs.
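
At its core, the RAG grounding described above retrieves the client documents most relevant to a query and injects them into the LLM prompt. A toy, dependency-free sketch using term overlap as a stand-in for vector similarity (document contents and function names are invented for illustration; production pipelines use embedding models and a vector store):

```python
def score(query: str, doc: str) -> int:
    """Crude relevance: count of shared lowercase terms (stand-in for vector similarity)."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents with the highest relevance score."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def grounded_prompt(query: str, docs: list[str]) -> str:
    """Build a prompt whose answer is grounded in the retrieved context."""
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

kb = [
    "Resetting a VPN token requires approval from the IT service desk.",
    "Quarterly invoices are generated on the first business day.",
    "VPN outages are logged in the infrastructure incident channel.",
]
print(grounded_prompt("How do I reset my VPN token?", kb))
```

The real pipeline swaps `score` for embedding similarity, adds chunking and re-ranking, and feeds infrastructure signals (logs, APIs) in as additional context sources.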

Collaborate & Enable

  • Work with customer data science teams.
  • Collaborate with other internal Solutions Architects, Engineering, and Product teams.
  • Ensure seamless integration of AI solutions.
  • Serve as an expert on the RapidCanvas platform.
  • Enable and support customers in building their own applications.
  • Act as a Product Champion, providing crucial feedback to the product team to drive innovation.

Data & Model Management

  • Oversee the entire AI project lifecycle.
  • Start from data preprocessing and model development.
  • Finish with deployment, monitoring, and optimization.

Champion Best Practices

  • Write clean, maintainable Python code.
  • Champion engineering best practices.
  • Ensure high performance, accuracy, and scalability.

Key Skills Required

Experience

  • Minimum 5+ years of hands-on experience in AI/ML engineering or backend systems.
  • Recent exposure to LLMs or intelligent agents is a must.

Technical Expertise

  • Proficiency in Python.
  • Proven track record of building scalable backend services or APIs.
  • Expertise in machine learning, deep learning, and Generative AI concepts.
  • Hands-on experience with LLM platforms (e.g., GPT, Gemini).
  • Deep understanding of and hands-on experience with agentic frameworks like LangChain, LangGraph, or CrewAI.
  • Experience with vector databases (e.g., Pinecone, Weaviate, FAISS).

Customer & Communication Skills

  • Proven ability to partner with enterprise stakeholders.
  • Excellent presentation skills.
  • Comfortable working independently.
  • Manage multiple projects simultaneously.

Preferred Skills

  • Experience with cloud platforms (e.g., AWS, Azure, Google Cloud).
  • Knowledge of MLOps practices.
  • Experience in the AI services industry or startup environments.

Why Join us

  • High-impact opportunity: Play a pivotal role in building a new business vertical within a rapidly growing AI company.
  • Strong leadership & funding: Backed by top-tier investors, our leadership team has deep experience scaling AI-driven businesses.
  • Recognized as a top 5 Data Science and Machine Learning platform by independent research firm G2 for customer satisfaction.


PGAGI
Javeriya Shaik
Posted by Javeriya Shaik
Remote only
0 - 0.6 yrs
₹2L - ₹2L / yr
Python
Large Language Models (LLM)
Natural Language Processing (NLP)
Deep Learning
FastAPI
+1 more

We're at the forefront of creating advanced AI systems, from fully autonomous agents that provide intelligent customer interaction to data analysis tools that offer insightful business solutions. We are seeking enthusiastic interns who are passionate about AI and ready to tackle real-world problems using the latest technologies.


Duration: 6 months


Perks:

- Hands-on experience with real AI projects.

- Mentoring from industry experts.

- A collaborative, innovative and flexible work environment

After completion of the internship period, there is a chance to receive a full-time offer as an AI/ML Engineer (up to 12 LPA).


Compensation:

- Joining Bonus: A one-time bonus of INR 2,500 will be awarded upon joining.

- Stipend: Base is INR 8,000/- and can increase up to INR 20,000/- depending on performance metrics.

Key Responsibilities

  • Experience working with Python, LLMs, deep learning, NLP, etc.
  • Utilize GitHub for version control, including pushing and pulling code updates.
  • Work with Hugging Face and OpenAI platforms for deploying models and exploring open-source AI models.
  • Engage in prompt engineering and the fine-tuning process of AI models.

Requirements

  • Proficiency in Python programming.
  • Experience with GitHub and version control workflows.
  • Familiarity with AI platforms such as Hugging Face and OpenAI.
  • Understanding of prompt engineering and model fine-tuning.
  • Excellent problem-solving abilities and a keen interest in AI technology.


To Apply Click below link and submit the Assignment

https://pgagi.in/jobs/28df1e98-f0c3-4d58-9509-d5b1a4ea9754

Appiness Interactive Pvt. Ltd.
Bengaluru (Bangalore), Mumbai
6 - 10 yrs
₹8L - ₹28L / yr
Machine Learning (ML)
Artificial Intelligence (AI)
Retrieval Augmented Generation (RAG)
Java
Python
+3 more

Company Description

Appiness Interactive Pvt. Ltd. is a Bangalore-based product development and UX firm that specializes in digital services for startups to Fortune 500s. We work closely with our clients to create a comprehensive soul for their brand in the online world, engaged through multiple platforms of digital media. Our team is young, passionate, and aggressive, not afraid to think out of the box or tread the un-trodden path in order to deliver the best results for our clients. We pride ourselves on Practical Creativity, where the idea is only as good as the returns it fetches for our clients.


Position Overview:

Senior backend engineering role focused on building and operating ML-backed backend systems powering a large-scale AI product. This is a core foundation/platform role with end-to-end system ownership in a fast-moving, ambiguous environment within a high-intent foundation engineering pod of 10 engineers.


Key Responsibilities:

● Design, build, and operate ML-backed backend systems at scale

● Own runtime orchestration, session/state management, and retrieval/memory pipelines (chunking, embeddings, indexing, vector search, re-ranking, caching, freshness & deletion)

● Productionize ML workflows: feature/metadata services, model integration contracts, offline/online parity, and evaluation instrumentation

● Drive performance, reliability, and cost efficiency across latency, throughput, infra usage, and token economics

● Build observability-first systems with tracing, metrics, logs, guardrails, and fallback paths

● Partner closely with applied ML teams on prompt/tool schemas, routing, evaluation datasets, and safe releases

● Ship independently and own systems end-to-end
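
The chunking stage of the retrieval/memory pipelines above typically splits documents into fixed-size, overlapping windows before embedding and indexing, so context is not lost at chunk boundaries. A minimal sketch (window sizes are illustrative; production systems usually chunk by tokens, not words):

```python
def chunk(text: str, size: int = 50, overlap: int = 10) -> list[str]:
    """Split text into windows of `size` words, each sharing `overlap` words with the previous."""
    words = text.split()
    step = size - overlap
    return [" ".join(words[i:i + size]) for i in range(0, max(len(words) - overlap, 1), step)]

doc = ("word " * 120).strip()  # a 120-word toy document
pieces = chunk(doc)
print(len(pieces), [len(p.split()) for p in pieces])  # → 3 [50, 50, 40]
```

Each chunk is then embedded and written to the vector index; the overlap trades a little index size for better recall on queries that straddle chunk boundaries.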


Required Skills:

● 6+ years of backend/platform engineering experience

● Strong experience building distributed, production-grade systems

● Hands-on exposure to ML-adjacent systems (serving, retrieval, orchestration, inference pipelines)

● Proven ownership of reliability, performance, and cost optimization in production

● Must be based in Mumbai or Bangalore

● Must be willing to work in-office (mandatory)


Preferred (Bonus) Skills:

● Experience with greenfield AI platform development

● Already based in Mumbai

● Experience working with US enterprise clients

● Foundation/platform engineering background

Verix


Eman Khan
Posted by Eman Khan
Remote only
4 - 8 yrs
₹15L - ₹30L / yr
Search Engine Optimization (SEO)
Python
NodeJS (Node.js)
Java
SEMrush
+1 more

About OptimizeGEO

OptimizeGEO.ai is our flagship product that helps brands stay visible and discoverable in AI-powered answers. Unlike traditional SEO, which optimizes for keywords and rankings, OptimizeGEO operationalizes GEO principles, ensuring brands are mentioned, cited, and trusted by generative systems (ChatGPT, Gemini, Perplexity, Claude, etc.) and answer engines (featured snippets, voice search, and AI answer boxes).


Founded by industry veterans Kirthiga Reddy (ex-Meta, MD Facebook India) and Saurabh Doshi (ex-Meta, ex-Viacom), the company is backed by Micron Ventures, Better Capital, FalconX, and leading angels including Randi Zuckerberg, Vani Kola, and Harsh Jain.


Role Overview

We are hiring a Senior Backend Engineer to build the data and services layer that powers OptimizeGEO’s analytics, scoring, and reporting. This role partners closely with our GEO/AEO domain experts and data teams to translate framework gap analysis, share-of-voice, entity/knowledge-graph coverage, and trust signals into scalable backend systems and APIs.


You will design secure, reliable, and observable services that ingest heterogeneous web and third-party data, compute metrics, and surface actionable insights to customers via dashboards and reports.


Key Responsibilities

  • Own backend services for data ingestion, processing, and aggregation across crawlers, public APIs, search consoles, analytics tools, and third-party datasets.
  • Operationalize GEO/AEO metrics (visibility scores, coverage maps, entity health, citation/trust signals, competitor benchmarks) as versioned, testable algorithms.
  • Data scraping using various tools, and working on volume estimates for accurate consumer insights for brands
  • Design & implement APIs for internal use (data science, frontend) and external consumption (partner/export endpoints), with clear SLAs and quotas.
  • Data pipelines & orchestration: batch and incremental jobs, queueing, retries/backoff, idempotency, and cost-aware scaling.
  • Storage & modeling: choose fit-for-purpose datastores (OLTP/OLAP), schema design, indexing/partitioning, lineage, and retention.
  • Observability & reliability: logging, tracing, metrics, alerting; SLOs for freshness and accuracy; incident response playbooks.
  • Security & compliance: authN/authZ, secrets management, encryption, PII governance, vendor integrations.
  • Collaborate cross-functionally with domain experts to convert research into productized features and executive-grade reports.


Required Qualifications (Must Have)

  • Familiarity with technical SEO artifacts: schema.org/structured data, E-E-A-T, entity/knowledge-graph concepts, and crawl budgets.
  • Exposure to AEO/GEO and how LLMs weigh sources, citations, and trust; awareness of hallucination risks and mitigation.
  • Experience integrating SEO/analytics tools (Google Search Console, Ahrefs, SEMrush, Similarweb, Screaming Frog) and interpreting their data models.
  • Background in digital PR/reputation signals and local/international SEO considerations.
  • Comfort working with analysts to co-define KPIs and build executive-level reporting.


Expected Qualifications

  • 4+ years of experience building backend systems in production (startups or high-growth product teams preferred).
  • Proficiency in one or more of: Python, Node.js/TypeScript, Go, or Java.
  • Experience with cloud platforms (AWS/GCP/Azure) and containerized deployment (Docker, Kubernetes).
  • Hands-on with data pipelines (Airflow/Prefect, Kafka/PubSub, Spark/Flink or equivalent) and REST/GraphQL API design.
  • Strong grounding in systems design, scalability, reliability, and cost/performance trade-offs.


Tooling & Stack (Illustrative)

  • Runtime: Python/TypeScript/Go
  • Data: Postgres/BigQuery + object storage (S3/GCS)
  • Pipelines: Airflow/Prefect, Kafka/PubSub
  • Infra: AWS/GCP, Docker, Kubernetes, Terraform
  • Observability: OpenTelemetry, Prometheus/Grafana, ELK/Cloud Logging
  • Collab: GitHub, Linear/Jira, Notion, Looker/Metabase


Working Model

  • Hybrid-remote within India with limited periodic in-person collaboration
  • Startup velocity with pragmatic processes; bias to shipping, measurement, and iteration.


Equal Opportunity

OptimizeGEO is an equal-opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.

Read more
Grey Chain Technology

at Grey Chain Technology

5 candid answers
Deebaj Mir
Posted by Deebaj Mir
Remote only
7 - 10 yrs
₹18L - ₹24L / yr
skill iconPython
FastAPI
Generative AI
AI Agents
skill iconAmazon Web Services (AWS)
+1 more

Company: Grey Chain AI

Location: Remote

Experience: 7+ Years

Employment Type: Full Time


About the Role

We are looking for a Senior Python AI Engineer who will lead the design, development, and delivery of production-grade GenAI and agentic AI solutions for global clients. This role requires a strong Python engineering background, experience working with foreign enterprise clients, and the ability to own delivery, guide teams, and build scalable AI systems.


You will work closely with product, engineering, and client stakeholders to deliver high-impact AI-driven platforms, intelligent agents, and LLM-powered systems.

Key Responsibilities

  • Lead the design and development of Python-based AI systems, APIs, and microservices.
  • Architect and build GenAI and agentic AI workflows using modern LLM frameworks.
  • Own end-to-end delivery of AI projects for international clients, from requirement gathering to production deployment.
  • Design and implement LLM pipelines, prompt workflows, and agent orchestration systems.
  • Ensure reliability, scalability, and security of AI solutions in production.
  • Mentor junior engineers and provide technical leadership to the team.
  • Work closely with clients to understand business needs and translate them into robust AI solutions.
  • Drive adoption of latest GenAI trends, tools, and best practices across projects.

Must-Have Technical Skills

  • 7+ years of hands-on experience in Python development, building scalable backend systems.
  • Strong experience with Python frameworks and libraries (FastAPI, Flask, Pydantic, SQLAlchemy, etc.).
  • Solid experience working with LLMs and GenAI systems (OpenAI, Claude, Gemini, open-source models).
  • Good experience with agentic AI frameworks such as LangChain, LangGraph, LlamaIndex, or similar.
  • Experience designing multi-agent workflows, tool calling, and prompt pipelines.
  • Strong understanding of REST APIs, microservices, and cloud-native architectures.
  • Experience deploying AI solutions on AWS, Azure, or GCP.
  • Knowledge of MLOps / LLMOps, model monitoring, evaluation, and logging.
  • Proficiency with Git, CI/CD, and production deployment pipelines.


Leadership & Client-Facing Experience

  • Proven experience leading engineering teams or acting as a technical lead.
  • Strong experience working directly with foreign or enterprise clients.
  • Ability to gather requirements, propose solutions, and own delivery outcomes.
  • Comfortable presenting technical concepts to non-technical stakeholders.


What We Look For

  • Excellent communication, comprehension, and presentation skills.
  • High level of ownership, accountability, and reliability.
  • Self-driven professional who can operate independently in a remote setup.
  • Strong problem-solving mindset and attention to detail.
  • Passion for GenAI, agentic systems, and emerging AI trends.


Why Grey Chain AI

Grey Chain AI is a Generative AI-as-a-Service and Digital Transformation company trusted by global brands such as UNICEF, BOSE, KFINTECH, WHO, and Fortune 500 companies. We build real-world, production-ready AI systems that drive business impact across industries like BFSI, Non-profits, Retail, and Consulting.

Here, you won’t just experiment with AI — you will build, deploy, and scale it for the real world.



Read more
Flycatch infotech PVT LTD
Flycatch Recruitment
Posted by Flycatch Recruitment
Remote only
3 - 4 yrs
₹8.4L - ₹9.6L / yr
skill iconJava
skill iconPython

1. Minimum of 3 years of experience in ERPNext development, with a strong understanding of the ERPNext framework and customization.

2. Proficiency in Python, JavaScript, HTML, CSS, and Frappe framework. Experience with ERPNext’s core modules such as Accounting, Sales, Purchase, Inventory, and HR is essential.

3. Experience with MySQL or MariaDB databases.

Read more
SPI Aviation Support Services Pvt Ltd
Mohammed Sadiq Isham A S
Posted by Mohammed Sadiq Isham A S
Chennai
0 - 0 yrs
₹4 - ₹5 / mo
Artificial Intelligence (AI)
skill iconMachine Learning (ML)
Computer Vision
skill iconPython

AI / ML Intern (On-site) – Aviation Industry

Company: SPI Aviation Support Services

Location: Chennai, India

Job Type: Internship (On-site, Full-time)

Duration: 3 Months (April–June)


Eligibility: Engineering students specializing in Computer Science, Artificial Intelligence, Data Science, Machine Learning, or related disciplines. Candidates graduating in 2026 are preferred.


Stipend: ₹5,000 per month


Post-Internship Opportunity: Based on performance, interns may be considered for full-time employment with a compensation package of up to ₹6 LPA.


About Us

SPI Aviation Support Services, founded in October 2023, is an AI-driven aviation solutions provider based in Chennai. We support global aviation engine MROs by optimizing operations, compliance readiness, and asset performance using advanced AI-enabled technologies.


Internship Role

We are seeking motivated AI / ML Interns to work on real-world applications of artificial intelligence within the aviation domain. Interns will collaborate with experienced professionals and contribute to live projects supporting international clients.


  • Candidates available for on-site, full-time engagement during the internship period


Preferred Skills

  • Strong fundamentals in Artificial Intelligence and Machine Learning
  • Basic understanding of Computer Vision concepts
  • Familiarity with Python and data handling (good to have)

What You Will Gain

  • Hands-on experience with AI applications in the aviation industry
  • Exposure to data analysis, automation, and intelligent systems
  • Understanding of global aviation engine MRO workflows
  • Mentorship from experienced industry professionals
  • Experience working on live, production-impacting projects

Work Location

  • On-site at our Chennai office

This internship is ideal for students looking to apply AI/ML concepts in a high-impact, industry-driven environment.



Job Type: Internship

Contract length: 3 months

Work Location: In person

Read more
Virtana

at Virtana

2 candid answers
Krutika Devadiga
Posted by Krutika Devadiga
Chennai
6 - 10 yrs
Best in industry
skill iconPython
pandas
SciPy
NumPy
Supervised learning
+9 more

Senior Data Scientist 

We are seeking a Senior Data Scientist Engineer with experience bringing highly scalable enterprise SaaS applications to market. This is a uniquely impactful opportunity to help drive our business forward and directly contribute to long-term growth at Virtana. 

If you thrive in a fast-paced environment, take initiative, embrace proactivity and collaboration, and you’re seeking an environment for continuous learning and improvement, we’d love to hear from you! 

Virtana is a “remote first” work environment so you’ll be able to work from the comfort of your home while collaborating with teammates on a variety of connectivity tools and technologies. 


Work Location- Chennai


Job Type- Hybrid


Role Responsibilities: 

  • Research and test machine learning approaches for analyzing large-scale distributed computing applications. 
  • Develop production-ready implementations of proposed solutions across different AI and ML models and algorithms, including testing on live customer data to improve accuracy, efficacy, and robustness.
  • Work closely with other functional teams to integrate implemented systems into the SaaS platform 
  • Suggest innovative and creative concepts and ideas that would improve the overall platform  


 Qualifications:  

The ideal candidate must have the following qualifications: 

  • 5+ years' experience in practical implementation and deployment of large customer-facing ML-based systems.
  • MS or M.Tech (preferred) in applied mathematics/statistics; CS or Engineering disciplines are acceptable, but candidates must have strong quantitative and applied mathematical skills.
  • In-depth working familiarity, beyond coursework, with classical and current ML techniques, including both supervised and unsupervised learning algorithms.
  • Implementation experience and deep knowledge of Classification, Time Series Analysis, Pattern Recognition, Reinforcement Learning, Deep Learning, Dynamic Programming, and Optimization.
  • Experience in working on modeling graph structures related to spatiotemporal systems 
  • Programming skills in Python is a must 
  • Experience in developing and deploying on cloud (AWS or Google or Azure) 
  • Good verbal and written communication skills 
  • Familiarity with well-known ML frameworks such as Pandas, Keras, TensorFlow 
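As one concrete flavor of the time-series work this role mentions (monitoring infrastructure telemetry for anomalies), here is a minimal rolling z-score detector; the window size and threshold are illustrative assumptions, not Virtana's method:

```python
import math

def zscore_anomalies(series, window=5, threshold=3.0):
    """Flag indices whose value deviates more than `threshold` standard
    deviations from the mean of the preceding `window` points — a classic
    baseline before reaching for heavier ML models."""
    anomalies = []
    for i in range(window, len(series)):
        ref = series[i - window:i]
        mean = sum(ref) / window
        std = math.sqrt(sum((x - mean) ** 2 for x in ref) / window)
        if std > 0 and abs(series[i] - mean) / std > threshold:
            anomalies.append(i)
    return anomalies
```

In production the same idea is usually extended with seasonality handling and adaptive thresholds, but the core "deviation from recent baseline" logic is unchanged.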


About Virtana: 

Virtana delivers the industry’s only unified software multi-cloud management platform that allows organizations to monitor infrastructure, de-risk cloud migrations, and reduce cloud costs by 25% or more. 

 

Over 200 Global 2000 enterprise customers, such as AstraZeneca, Dell, Salesforce, Geico, Costco, Nasdaq, and Boeing, have valued Virtana’s software solutions for over a decade. 

 

Our modular platform for hybrid IT digital operations includes Infrastructure Performance Monitoring and Management (IPM), Artificial Intelligence for IT Operations (AIOps), Cloud Cost Management (Fin Ops), and Workload Placement Readiness Solutions. Virtana is simplifying the complexity of hybrid IT environments with a single cloud-agnostic platform across all the categories listed above. The $30B IT Operations Management (ITOM) Software market is ripe for disruption, and Virtana is uniquely positioned for success. 

 

Company Profitable Growth and Recognition 

In FY2023 (Fiscal year ending January 2023), Virtana earned: 

— Best CEO, Best CEO for Women, and Best CEO for Diversity by Comparably 

— Two years in a row YoY Profitable Annual Recurring Revenue (ARR) Growth 

— Two consecutive years of +EBITDA, 78% YoY EBITDA growth, or 20% of Revenue 

— Positive Cash Flow, 171% YoY cash flow growth 

 

Read more
Wama Technology

at Wama Technology

2 candid answers
HR Wama
Posted by HR Wama
Mumbai
3 - 10 yrs
₹6L - ₹10.8L / yr
skill iconReact.js
Fullstack Developer
skill iconNodeJS (Node.js)
skill iconLaravel
skill iconPython
+1 more

Location: Mumbai (Onsite)

Experience: 4–6 Years

Salary: ₹50,000 – ₹90,000 per month (depending on experience & skill set)

Employment Type: Full-time

Job Description

We are looking for a skilled React Developer to join our team in Mumbai. The ideal candidate should have strong hands-on experience in building modern, responsive web applications using React and be comfortable working with at least one backend technology such as Python, Node.js, or PHP.

Key Responsibilities

  • Develop and maintain user-friendly web applications using React.js
  • Convert UI/UX designs into high-quality, reusable components
  • Work with REST APIs and integrate frontend with backend services
  • Collaborate with backend developers (Python / Node.js / PHP)
  • Optimize applications for performance, scalability, and responsiveness
  • Manage application state using Redux / Context API / similar
  • Write clean, maintainable, and well-documented code
  • Participate in code reviews and sprint planning
  • Debug and resolve frontend and integration issues
  • Ensure cross-browser and cross-device compatibility

Required Skills & Qualifications

  • 4–6 years of experience in frontend development
  • Strong expertise in React.js
  • Proficiency in JavaScript (ES6+)
  • Experience with HTML5, CSS3, Responsive Design
  • Hands-on experience with RESTful APIs
  • Working knowledge of at least one backend technology:
  • Python (Django / Flask / FastAPI) OR
  • Node.js (Express / NestJS) OR
  • PHP (Laravel preferred)
  • Familiarity with Git / version control systems
  • Understanding of component-based architecture
  • Experience working in Linux environments

Good to Have (Preferred Skills)

  • Experience with Next.js
  • Knowledge of TypeScript
  • Familiarity with Redux / React Query
  • Basic understanding of databases (MySQL / MongoDB)
  • Experience with CI/CD pipelines
  • Exposure to AWS or cloud platforms
  • Experience working in Agile/Scrum teams

What We Offer

  • Competitive salary based on experience and skills
  • Onsite role with a collaborative team in Mumbai
  • Opportunity to work on modern tech stack and real-world projects
  • Career growth and learning opportunities

Interested candidates can share their resumes at


Job Type: Full-time

Application Question(s):

  • If selected, how soon can you join?
  • Are you okay with the salary slab (₹50,000–₹90,000), depending upon your experience?
  • Have you worked on a production React application where you integrated REST APIs and handled authentication and error scenarios with a backend (Python / Node.js / PHP)?

Experience:

  • Total: 3 years (Required)
  • Python: 3 years (Required)

Location:

  • Mumbai, Maharashtra (Required)

Work Location: In person

Read more
Bengaluru (Bangalore)
5 - 10 yrs
₹25L - ₹50L / yr
skill iconNodeJS (Node.js)
skill iconReact.js
skill iconPython
skill iconJava
Data engineering
+10 more

Job Title : Senior Software Engineer (Full Stack — AI/ML & Data Applications)

Experience : 5 to 10 Years

Location : Bengaluru, India

Employment Type : Full-Time | Onsite


Role Overview :

We are seeking a Senior Full Stack Software Engineer with strong technical leadership and hands-on expertise in AI/ML, data-centric applications, and scalable full-stack architectures.

In this role, you will design and implement complex applications integrating ML/AI models, lead full-cycle development, and mentor engineering teams.


Mandatory Skills :

Full Stack Development (React/Angular/Vue + Node.js/Python/Java), Data Engineering (Spark/Kafka/ETL), ML/AI Model Integration (TensorFlow/PyTorch/scikit-learn), Cloud & DevOps (AWS/GCP/Azure, Docker, Kubernetes, CI/CD), SQL/NoSQL Databases (PostgreSQL/MongoDB).


Key Responsibilities :

  • Architect, design, and develop scalable full-stack applications for data and AI-driven products.
  • Build and optimize data ingestion, processing, and pipeline frameworks for large datasets.
  • Deploy, integrate, and scale ML/AI models in production environments.
  • Drive system design, architecture discussions, and API/interface standards.
  • Ensure engineering best practices across code quality, testing, performance, and security.
  • Mentor and guide junior developers through reviews and technical decision-making.
  • Collaborate cross-functionally with product, design, and data teams to align solutions with business needs.
  • Monitor, diagnose, and optimize performance issues across the application stack.
  • Maintain comprehensive technical documentation for scalability and knowledge-sharing.

Required Skills & Experience :

  • Education : B.E./B.Tech/M.E./M.Tech in Computer Science, Data Science, or equivalent fields.
  • Experience : 5+ years in software development with at least 2+ years in a senior or lead role.
  • Full Stack Proficiency :
  • Front-end : React / Angular / Vue.js
  • Back-end : Node.js / Python / Java
  • Data Engineering : Experience with data frameworks such as Apache Spark, Kafka, and ETL pipeline development.
  • AI/ML Expertise : Practical exposure to TensorFlow, PyTorch, or scikit-learn and deploying ML models at scale.
  • Databases : Strong knowledge of SQL & NoSQL systems (PostgreSQL, MongoDB) and warehousing tools (Snowflake, BigQuery).
  • Cloud & DevOps : Working knowledge of AWS, GCP, or Azure; containerization & orchestration (Docker, Kubernetes); CI/CD; MLflow/SageMaker is a plus.
  • Visualization : Familiarity with modern data visualization tools (D3.js, Tableau, Power BI).

Soft Skills :

  • Excellent communication and cross-functional collaboration skills.
  • Strong analytical mindset with structured problem-solving ability.
  • Self-driven with ownership mentality and adaptability in fast-paced environments.

Preferred Qualifications (Bonus) :

  • Experience deploying distributed, large-scale ML or data-driven platforms.
  • Understanding of data governance, privacy, and security compliance.
  • Exposure to domain-driven data/AI use cases in fintech, healthcare, retail, or e-commerce.
  • Experience working in Agile environments (Scrum/Kanban).
  • Active open-source contributions or a strong GitHub technical portfolio.
Read more
Tops Infosolutions
Ahmedabad
4 - 11 yrs
₹10L - ₹18L / yr
skill iconPython
AWS Lambda
skill iconPostgreSQL
Retrieval Augmented Generation (RAG)
AI Agents
+2 more

Job Description: Sr. Python Developer

Experience: 4+ Years

Job Location: Nr. Iskcon Mega Mall, SG Highway, Ahmedabad

Timings: 10 AM to 7 PM


Job Description:


Technical Skills :

  • Good knowledge of Python with a minimum of 4 years' experience.
  • Strong understanding of various Python libraries, APIs, and toolkits.
  • Good experience in Django, Django REST Framework, and the Flask framework.
  • Understanding of AWS serverless implementation using Lambda and API Gateway.
  • Hands-on experience with databases like MySQL and PostgreSQL.
  • Good experience/understanding of Agentic AI / RAG.
  • Proficiency in NoSQL document databases, especially MongoDB and Redis.
  • Strong hold on Data Structures and Algorithms.
  • Thorough understanding of version control concepts, especially Git.
  • Understanding of the whole web stack and how all the pieces fit together (front-end, database, network layer, etc.) and how they impact the performance of your application.
  • Excellent understanding of MVC and OOP; bonus for understanding of prevalent design patterns.
  • Excellent debugging and optimization skills.
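To make the MVC/OOP/design-pattern expectation concrete, here is a minimal repository-pattern sketch (all names are illustrative): the controller depends only on an abstract data-access boundary, which is what keeps MVC layers testable and decoupled from any particular database.

```python
from abc import ABC, abstractmethod

class UserRepository(ABC):
    """Abstract data-access boundary; controllers depend on this interface,
    not on a concrete database."""
    @abstractmethod
    def get(self, user_id): ...

class InMemoryUserRepository(UserRepository):
    """Concrete implementation — trivially swappable for a SQL-backed one."""
    def __init__(self):
        self._users = {}

    def save(self, user_id, user):
        self._users[user_id] = user

    def get(self, user_id):
        return self._users.get(user_id)

def greeting_controller(repo, user_id):
    """A 'controller' that only talks to the repository interface."""
    user = repo.get(user_id)
    return f"Hello, {user['name']}!" if user else "User not found"
```

Swapping `InMemoryUserRepository` for a Django-ORM- or SQLAlchemy-backed class requires no change to the controller, which is the point of the pattern.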


Job Responsibilities :

  • Building big, robust, scalable, and maintainable applications.
  • Debugging, Fixing bugs, Identifying Performance Issues, and Improving App Performance.
  • Continuously discover, evaluate, and implement new technologies to maximize development efficiency.
  • Handling complex technical issues related to web app development & discussing solutions with the team.
  • Developing, Deploying, and maintaining Multistage, Multi-tier applications.
  • To write high-performing code and will be participating in key architectural decisions.
  • Project Execution & Client Interaction
  • Scrum Implementation


Read more
Newpage Solutions

at Newpage Solutions

2 candid answers
Reshika Mendiratta
Posted by Reshika Mendiratta
Bengaluru (Bangalore)
7yrs+
Upto ₹45L / yr (Varies)
skill iconPython
Large Language Models (LLM)
skill iconMachine Learning (ML)
Artificial Intelligence (AI)
Generative AI
+11 more

Lead AI Engineer

Location: Bengaluru, Hybrid | Type: Full-time


About Newpage Solutions

Newpage Solutions is a global digital health innovation company helping people live longer, healthier lives. We partner with life sciences organisations—which include pharmaceutical, biotech, and healthcare leaders—to build transformative AI and data-driven technologies addressing real-world health challenges.

From strategy and research to UX design and agile development, we deliver and validate impactful solutions using lean, human-centered practices.

We are proud to be a ‘Great Place to Work®’ certified company for the last three consecutive years. We also hold a top Glassdoor rating and are named among the "Top 50 Most Promising Healthcare Solution Providers" by CIOReview.

As an organisation, we foster creativity, continuous learning, and inclusivity, creating an environment where bold ideas thrive and make a measurable difference in people’s lives.


Your Mission

We’re seeking a highly experienced, technically exceptional Lead AI Engineer to architect and deliver next-generation Generative AI and Agentic systems. You will drive end-to-end innovation, from model selection and orchestration design to scalable backend implementation, all while collaborating with cross-functional teams to transform AI research into production-ready solutions.

This is an individual-contributor leadership role for someone who thrives on ownership, fast execution, and technical excellence. You will define the standards for quality, scalability, and innovation across all AI initiatives.


What You’ll Do

Develop AI Applications & Agentic Systems

  • Architect, build, and optimise production-grade Generative AI and agentic applications using frameworks such as LangChain, LangGraph, LlamaIndex, Semantic Kernel, n8n, Pydantic AI or custom orchestration layers integrating with LLMs such as GPT, Claude, Gemini as well as self-hosted LLMs along with MCP integrations.
  • Implement Retrieval-Augmented Generation (RAG) techniques leveraging vector databases (Pinecone, ChromaDB, Weaviate, pgvector, etc.) and search engines such as ElasticSearch / Solr, using both TF-IDF/BM25-based full-text search and similarity-search techniques.
  • Implement guardrails, observability, fine-tune and train models for industry or domain-specific use cases.
  • Build multi-modal workflows using text, image, voice, and video.
  • Design robust prompt & context engineering frameworks to improve accuracy, repeatability, quality, cost, and latency.
  • Build supporting microservices and modular backends using Python, JavaScript, or Java aligned with domain-driven design, SOLID principles, OOP, and clean architecture, using various databases including relational, document, Key-Value, Graph, and event-driven systems using Kafka / MSK, SQS, etc.
  • Deploy cloud-native applications in hyper-scalers such as AWS / GCP / Azure using containerisation and orchestration with Docker / Kubernetes or serverless architecture.
  • Apply industry best engineering practices: TDD, well-structured and clean code with linting, domain-driven design, security-first design (secrets management, rotation, SAST, DAST), comprehensive observability (structured logging, metrics, tracing), containerisation & orchestration (Docker, Kubernetes), automated CI/CD pipelines (GitHub Actions, Jenkins).
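The RAG bullet above hinges on one core operation: ranking stored chunks by similarity to the query before handing them to the LLM. A deliberately tiny, dependency-free sketch of that retrieval step (a toy term-frequency "embedding" stands in for real embedding models and the vector databases named above):

```python
import math
from collections import Counter

def embed(text):
    """Toy 'embedding': a bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query, chunks, k=2):
    """Return the top-k chunks most similar to the query."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]
```

A production pipeline replaces `embed` with a learned embedding model, `retrieve` with an approximate-nearest-neighbor index, and often blends in BM25 scores, but the rank-then-augment flow is the same.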

AI-Assisted Development, Context Engineering & Innovation

  • Use AI-assisted development tools such as Claude Code, GitHub Copilot, Codex, Roo Code, Cursor to accelerate development while maintaining code quality and maintainability.
  • Utilise coding assistant tools with native instructions, templates, guides, workflows, sub-agents, and more to create developer workflows that improve development velocity, standardisation, and reliability across AI teams.
  • Ensure industry best practices to develop well-structured code that is testable, maintainable, performant, scalable, and secure.
  • Partner with Product, Design, and ML teams to translate conceptual AI features into scalable user-facing products.
  • Provide technical mentorship and guide team members in system design, architecture reviews, and AI best practices.
  • Lead POCs, internal research experiments, and innovation sprints to explore and validate emerging AI techniques.

What You Bring

  • 7–12 years of total experience in software development, with at least 3 years in AI/ML systems engineering or Generative AI.
  • Experience with cloud-native deployments and services in AWS / GCP / Azure, with the ability to architect distributed systems.
  • A ‘no-compromise’ attitude with engineering best practices such as clean code, TDD, containerisation, security, CI/CD, scalability, performance, and cost optimisation.
  • Active user of AI-assisted development tools (Claude Code, GitHub Copilot, Cursor) with demonstrable experience using structured workflows and sub-agents.
  • A deep understanding of LLMs, context engineering approaches, and best practices, with the ability to optimise accuracy, latency, and cost.
  • Python or JavaScript experience with strong grasp of OOP, SOLID principles, 12-factor application development, and scalable microservice architecture.
  • Proven track record developing and deploying GenAI/LLM-based systems in production.
  • Advanced understanding of context engineering, prompt construction, optimisation, and evaluation techniques.
  • End-to-end implementation experience using vector databases and retrieval pipelines.
  • Experience with GitHub Actions, Docker, Kubernetes, and cloud-native deployments.
  • Obsession with clean code, system scalability, and performance optimisation.
  • Ability to balance rapid prototyping with long-term maintainability.
  • Excel at working independently while collaborating effectively across teams.
  • Stay ahead of the curve on new AI models, frameworks, and best practices.
  • Have a founder’s mindset and love solving ambiguous, high-impact technical challenges.
  • Bachelor’s or Master’s degree in Computer Science, Machine Learning, or a related technical discipline.

Bonus Skills / Experience

  • Understanding of MLOps, model serving, scaling, and monitoring workflows (e.g., BentoML, MLflow, Vertex AI, AWS Sagemaker).
  • Experience building streaming + batch data ingestion and transformation pipelines (Spark / Airflow / Beam).
  • Mobile and front-end web application development experience.

What We Offer

  • A people-first culture – Supportive peers, open communication, and a strong sense of belonging.
  • Smart, purposeful collaboration – Work with talented colleagues to create technologies that solve meaningful business challenges.
  • Balance that lasts – We respect your time and support a healthy integration of work and life.
  • Room to grow – Opportunities for learning, leadership, and career development, shaped around you.
  • Meaningful rewards – Competitive compensation that recognises both contribution and potential.
Read more
Octobotics Tech

at Octobotics Tech

2 recruiters
Reshika Mendiratta
Posted by Reshika Mendiratta
Noida
3yrs+
Best in industry
ROS
Internationalization and localization
Navigation
Nav2
SLAM
+2 more

Senior Robotics Engineer (ROS 2 Migration & Systems) - WorkFlow

Department: R&D Engineering

Location: Octobotics HQ (Noida/On-Site)


The Mission

To successfully migrate our legacy ROS 1 architecture to a high-performance ROS 2 Native ecosystem, architecting a navigation stack that survives the "unheard-of" edge cases of the real world.


1. The Context: The Great Migration

Octobotics is at a pivot point. Our legacy stack was built on ROS 1 (Noetic). It got us to MVP.

But to scale, we are tearing it down and rebuilding in ROS 2 (Humble/Iron).

We are not looking for someone to maintain old code. We are looking for an Architect to lead this migration. You will deal with the pain of bridging via ros1_bridge, porting custom messages, and rewriting node lifecycles from scratch.

If you are afraid of breaking changes and complex dependency hell, stop reading now.


2. The "Scorecard" (Outcomes)

  • The Migration: Port our core navigation and control logic from ROS 1 to ROS 2. This involves rewriting nodes to utilize Lifecycle Management and Node Composition for zero-copy transfer.
  • Nav2 Architecture: We don't just "install" Nav2. You will write custom Behavior Tree plugins and Costmap layers to handle dynamic obstacles in unstructured environments.
  • Middleware Optimization: You will own the DDS layer (FastDDS/CycloneDDS). You must tune QoS profiles for lossy WiFi environments and debug discovery traffic issues that traditional network engineers don't understand.
  • Sensor Fusion & State Estimation: Implement and tune EKF/UKF pipelines (robot_localization) to fuse IMU, Wheel Odometry, and LiDAR. You must understand Covariance Matrices—if your covariance grows unbounded, you have failed.
  • Serialization Strategy: Implement Protocol Buffers (Protobuf) for high-efficiency, non-ROS internal data logging and inter-process communication where overhead must be zero.

3. Technical Requirements (The Hard Skills)

The Stack (ROS 1 & ROS 2):

  • Deep ROS 2 Mastery: You know the difference between spin(), spin_some(), and Multi-Threaded Executors. You understand why we are moving to ROS 2 (Real-time constraints, DDS security, QoS).
  • Navigation Stack: In-depth knowledge of Nav2 (Planners, Controllers, Recoveries). You understand Global vs. Local planners (A*, DWB, TEB).
  • SLAM & Localization: Experience with Graph-based SLAM (Cartographer, SLAM Toolbox). You know how to close loops and optimize pose graphs.

The Math (The "Weeder"):

  • Linear Algebra & Geometry: Rigid body transformations are your second language. You understand Quaternions, homogeneous transformation matrices ($T \in SE(3)$), and how to avoid Gimbal Lock.
  • Kinematics: You can derive Forward and Inverse Kinematics for Differential Drive and Ackermann steering chassis.
  • Probabilistic Robotics: Understanding of Bayesian estimation. You know that sensors are noisy and that "Ground Truth" is a myth.
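To make the quaternion bullet concrete: composing rotations via the Hamilton product sidesteps the gimbal-lock failure mode of Euler angles. A small stdlib-only sketch (w-first quaternion convention assumed):

```python
import math

def q_mul(q1, q2):
    """Hamilton product of quaternions (w, x, y, z): composes rotations."""
    w1, x1, y1, z1 = q1
    w2, x2, y2, z2 = q2
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def q_about_z(theta):
    """Unit quaternion for a rotation of theta radians about the z axis."""
    return (math.cos(theta / 2), 0.0, 0.0, math.sin(theta / 2))

def rotate(q, v):
    """Rotate vector v by unit quaternion q via q * v * q_conjugate."""
    w, x, y, z = q
    qv = (0.0, *v)
    conj = (w, -x, -y, -z)
    return q_mul(q_mul(q, qv), conj)[1:]
```

Two 45° rotations about z compose (by multiplication, with no Euler-angle bookkeeping) into a 90° rotation that maps the x axis onto the y axis — exactly the kind of `map -> odom -> base_link` frame algebra the TF tree performs.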

The Code:

  • C++ (14/17): Real-time safe coding standards. RAII, Smart Pointers, and template metaprogramming.
  • Python: For prototyping and complex orchestration.

4. The "Topgrading" Filter (Do NOT apply if...)

  • You think roslaunch is the same as ros2 launch.
  • You have never defined a custom .msg or .srv file.
  • You struggle to visualize a TF tree in your head (map -> odom -> base_link).
  • You think latency "doesn't matter" in a control loop.

5. The Challenge: Surfing the Tsunami

Let’s be honest: AMR (Autonomous Mobile Robots) is hard.

We are solving problems that are unheard of in the standard "warehouse" world. We deal with dynamic crowds, changing lighting, and network black holes.

There will be days when the Sensor Fusion drifts for no reason. There will be days when the DDS discovery fails because of a multicast storm. There will be architectural "Tsunamis" that threaten to wipe out our sprint.


We are looking for the engineer who doesn't run for higher ground, but grabs a board and says, "I’m ready to surf."

Read more
Poshmark

at Poshmark

3 candid answers
1 recruiter
Reshika Mendiratta
Posted by Reshika Mendiratta
Chennai
7yrs+
Upto ₹35L / yr (Varies)
Test Automation (QA)
Software Testing (QA)
skill iconPython
skill iconJava
Appium
+5 more

About Poshmark

Poshmark is a leading fashion resale marketplace powered by a vibrant, highly engaged community of buyers and sellers and real-time social experiences. Designed to make online selling fun, more social and easier than ever, Poshmark empowers its sellers to turn their closet into a thriving business and share their style with the world. Since its founding in 2011, Poshmark has grown its community to over 130 million users and generated over $10 billion in GMV, helping sellers realize billions in earnings, delighting buyers with deals and one-of-a-kind items, and building a more sustainable future for fashion. For more information, please visit www.poshmark.com, and for company news, visit newsroom.poshmark.com.


About the role

We are looking for a Lead Software Development Engineer In Test (Lead SDET) who will define, design, and drive the automation and quality engineering strategy across Poshmark. You will take a hands-on leadership role in building scalable test automation frameworks and infrastructure while partnering closely with Engineering, Product, and QA Engineering teams.

You will have a significant impact on the quality of Poshmark’s growing products and services by creating the next generation of tools and frameworks that enable faster development, better testability, and higher-confidence releases. You will influence software design and promote strong engineering practices.


Responsibilities

  • Test Harnesses and Infrastructure: Design, implement, and maintain scalable test harnesses and testing infrastructure for web, mobile (iOS & Android), and APIs. Own the long-term architecture and evolution of automation frameworks. Leverage AI-assisted tools to improve test design, increase automation coverage, and reduce manual effort across web, mobile, and API testing.
  • Automation Framework Leadership: Lead the design, enhancement, and optimization of automation frameworks using tools such as Selenium, Appium, Postman, and AI-driven solutions. Ensure frameworks are reliable, maintainable, and scalable, while establishing and enforcing strong coding and testing standards through regular code reviews.
  • Product Quality and Engineering Partnership: Actively monitor product development and usage to identify quality gaps and risks. Partner with developers to improve testability, prevent defects, and integrate testing early in the development lifecycle. Embed automation into CI/CD pipelines to enable continuous testing and rapid feedback.
  • Metrics, Reporting, and Continuous Improvement: Define and track quality metrics such as automation coverage, defect trends, and execution stability. Create automated reporting solutions to support data-driven quality decisions. Continuously improve testing processes, tools, and workflows across teams.
  • Leadership and Mentorship: Mentor and guide the team on automation best practices and framework design, while translating complex initiatives into clear, actionable goals that enable effective execution.
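Stable automation frameworks of the kind described above typically replace fixed sleeps with explicit polling waits. A framework-agnostic sketch (the helper name and timeouts are illustrative, not Poshmark's actual code):

```python
import time

def wait_until(predicate, timeout=2.0, interval=0.05):
    """Poll `predicate` until it returns a truthy value or `timeout` seconds pass."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = predicate()
        if result:
            return result
        time.sleep(interval)
    raise TimeoutError(f"condition not met within {timeout:.1f}s")

# Simulate a UI element that only "appears" on the third poll
state = {"polls": 0}

def element_visible():
    state["polls"] += 1
    return state["polls"] >= 3

visible = wait_until(element_visible)
```

Drivers like Selenium and Appium ship their own explicit-wait APIs; the polling shape is the same.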


6-Month Accomplishments

  • Stabilize and enhance existing automation frameworks to improve reliability and execution consistency
  • Use AI-driven insights to continuously optimize test coverage, execution efficiency, and defect detection effectiveness.
  • Collaborate with development teams to add new capabilities to the automation framework
  • Ensure regular, stable execution of regression tests across multiple platforms
  • Identify and resolve high-impact quality issues early in the development cycle


12+ Month Accomplishments

  • Establish a scalable, future-ready automation architecture deeply integrated with CI/CD pipelines
  • Guide the team in adopting AI-assisted testing practices that complement existing automation frameworks and quality processes.
  • Lead efforts to optimize test execution time through parallelization and smarter automation strategies
  • Mentor and grow a high-performing automation team with clear technical standards
  • Drive measurable improvements in product quality and reduction of production defects


Qualifications

  • 7+ years of experience in software testing with a strong focus on test automation
  • 4+ years of hands-on programming experience using languages such as Python or Java
  • Proven experience designing and scaling automation systems for web, mobile, and APIs
  • Strong expertise with frameworks and tools such as Appium, Selenium, WebDriver, and CI/CD systems
  • Experience across all phases of automation, including GUI and integration testing.
  • Hands-on experience with Jira, Confluence, GitHub, Unix commands, and Jenkins
  • Excellent communication, problem-solving, and technical leadership skills
Read more
Corporate Web Solutions
Remote only
0 - 1 yrs
₹1L - ₹2L / yr
Python
Data Science

About the Job :

As a Data Science Intern, you will work closely with our experienced data scientists and analysts to extract, analyze, and interpret large datasets to drive strategic decisions and improve our products and services. You will gain hands-on experience with data science tools, machine learning models, and statistical methods to help solve complex problems.


Currently offering a "Data Science Internship" for 1–6 months.


Data Science Projects Interns Will Work On:

Project 01 : Image Caption Generator Project in Python

Project 02 : Credit Card Fraud Detection Project

Project 03 : Movie Recommendation System

Project 04 : Customer Segmentation

Project 05 : Brain Tumor Detection with Data Science
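As a flavor of what a project like the fraud-detection one above involves, anomaly flagging often starts from a simple z-score rule before any machine learning is applied. A toy sketch with made-up transaction amounts:

```python
import statistics

# Hypothetical historical transaction amounts plus one suspicious outlier
amounts = [120.0, 90.0, 110.0, 95.0, 105.0, 2000.0]

mean = statistics.mean(amounts[:-1])    # baseline from the "normal" history
stdev = statistics.stdev(amounts[:-1])

def z_score(x):
    """Distance from the historical mean, in standard deviations."""
    return (x - mean) / stdev

flagged = [a for a in amounts if abs(z_score(a)) > 3]  # classic 3-sigma rule
```

Real fraud models replace the 3-sigma rule with supervised classifiers, but the feature intuition is similar.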


Eligibility


A PC or Laptop with decent internet speed.

Good understanding of the English language.

Any graduate with a desire to become a web developer. Freshers are welcome.

Knowledge of HTML, CSS, and JavaScript is a plus but NOT mandatory.

You will receive proper training, so don't hesitate to apply even if you have no coding background.


Duration: 2 Months (with the possibility of extending up to 6 months)

MODE: Work From Home (Online)


Key Responsibilities:


Assist in collecting, cleaning, and preprocessing data from various sources.

Perform exploratory data analysis to identify trends, patterns, and anomalies.

Develop and implement machine learning models and algorithms.

Create data visualizations and reports to communicate findings to stakeholders.

Collaborate with team members on data-driven projects and research.

Participate in meetings and contribute to discussions on project progress and strategy.


Benefits


Internship Certificate

Letter of recommendation

Performance-based stipend

Part-time work from home (2–3 hours per day)

5 days a week, Fully Flexible Shift

Read more
Wohlig Transformations Pvt Ltd
Apoorva Lakshkar
Posted by Apoorva Lakshkar
Mumbai
5 - 8 yrs
₹10L - ₹15L / yr
NodeJS (Node.js)
Python
RabbitMQ
PostgreSQL
BigQuery
+5 more


About Allvest :


- AI-driven financial planning and portfolio management platform

- Secure, data-backed portfolio oversight aligned with regulatory standards

- Building cutting-edge fintech solutions for intelligent investment decisions


Role Overview :


- Architect and build scalable, high-performance backend systems

- Work on mission-critical systems handling real-time market data and portfolio analytics

- Ensure regulatory compliance and secure financial transactions


Key Responsibilities :


- Design, develop, and maintain robust backend services and APIs using NodeJS and Python

- Build event-driven architectures using RabbitMQ and Kafka for real-time data processing

- Develop data pipelines integrating PostgreSQL and BigQuery for analytics and warehousing

- Ensure system reliability, performance, and security with focus on low-latency operations

- Lead technical design discussions, code reviews, and mentor junior developers

- Optimize database queries, implement caching strategies, and enhance system performance

- Collaborate with cross-functional teams to deliver end-to-end features

- Implement monitoring, logging, and observability solutions
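The event-driven bullet above (RabbitMQ/Kafka) boils down to decoupling producers from consumers via topics. An in-process toy sketch of that publish/subscribe shape (topic and field names hypothetical; a real system would use a broker):

```python
from collections import defaultdict

subscribers = defaultdict(list)  # topic name -> list of handler callables

def subscribe(topic, handler):
    subscribers[topic].append(handler)

def publish(topic, message):
    """Fan the message out to every handler registered for the topic."""
    for handler in subscribers[topic]:
        handler(message)

received = []
subscribe("orders.created", received.append)                               # e.g. analytics
subscribe("orders.created", lambda m: received.append({"audit": m["id"]}))  # e.g. audit log

publish("orders.created", {"id": 101, "amount": 250.0})
```

A broker adds persistence, acknowledgements, and delivery guarantees on top of this fan-out idea.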


Required Skills & Experience :


- 5+ years of professional backend development experience

- Strong expertise in NodeJS and Python for production-grade applications

- Proven experience designing RESTful APIs and microservices architectures

- Strong proficiency in PostgreSQL including query optimization and database design

- Hands-on experience with RabbitMQ and Kafka for event-driven systems

- Experience with BigQuery or similar data warehousing solutions

- Solid understanding of distributed systems, scalability patterns, and high-traffic applications

- Strong knowledge of authentication, authorization, and security best practices in financial applications

- Experience with Git, CI/CD pipelines, and modern development workflows

- Excellent problem-solving and debugging skills across distributed systems


Preferred Qualifications :


- Prior experience in fintech, banking, or financial services

- Familiarity with cloud platforms (GCP/AWS/Azure) and containerization (Docker, Kubernetes)

- Knowledge of frontend technologies for full-stack collaboration

- Experience with Redis or Memcached

- Understanding of regulatory requirements (KYC, compliance, data privacy)

- Open-source contributions or tech community participation


What We Offer :


- Opportunity to work on cutting-edge fintech platform with modern technology stack

- Collaborative environment with experienced team from leading financial institutions

- Competitive compensation with equity participation

- Challenging problems at the intersection of finance, AI, and technology

- Career growth in fast-growing startup environment


Location: Mumbai (Phoenix Market City, Kurla West)


Also Apply at https://wohlig.keka.com/careers/jobdetails/122768



Read more
Heaven Designs

at Heaven Designs

1 product
Reshika Mendiratta
Posted by Reshika Mendiratta
Remote only
2yrs+
Upto ₹12L / yr (Varies)
Python
Django
RESTful APIs
DevOps
CI/CD
+8 more

Backend Engineer (Python / Django + DevOps)


Company: SurgePV (A product by Heaven Designs Pvt. Ltd.)


About SurgePV

SurgePV is an AI-first solar design software built from more than a decade of hands-on experience designing and engineering thousands of solar installations at Heaven Designs. After working with nearly every solar design tool in the market, we identified major gaps in speed, usability, and intelligence—particularly for rooftop solar EPCs.

Our vision is to build the most powerful and intuitive solar design platform for rooftop installers, covering fast PV layouts, code-compliant engineering, pricing, proposals, and financing in a single workflow. SurgePV enables small and mid-sized solar EPCs to design more systems, close more deals, and accelerate the clean energy transition globally.

As SurgePV scales, we are building a robust backend platform to support complex geometry, pricing logic, compliance rules, and workflow automation at scale.


Role Overview

We are seeking a Backend Engineer (Python / Django + DevOps) to own and scale SurgePV’s core backend systems. You will be responsible for designing, building, and maintaining reliable, secure, and high-performance services that power our solar design platform.

This role requires strong ownership—you will work closely with the founders, frontend engineers, and product team to make architectural decisions and ensure the platform remains fast, observable, and scalable as global usage grows.


Key Responsibilities

  • Design, develop, and maintain backend services and REST APIs that power PV design, pricing, and core product workflows.
  • Collaborate with the founding team on system architecture, including authentication, authorization, billing, permissions, integrations, and multi-tenant design.
  • Build secure, scalable, and observable systems with structured logging, metrics, alerts, and rate limiting.
  • Own DevOps responsibilities for backend services, including Docker-based containerization, CI/CD pipelines, and production deployments.
  • Optimize PostgreSQL schemas, migrations, indexes, and queries for computation-heavy and geospatial workloads.
  • Implement caching strategies and performance optimizations where required.
  • Integrate with third-party APIs such as CRMs, financing providers, mapping platforms, and satellite or irradiance data services.
  • Write clean, maintainable, well-tested code and actively participate in code reviews to uphold engineering quality.
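Of the responsibilities above, rate limiting is a good example of the small, testable primitives involved: most implementations are a token bucket. A minimal in-memory sketch (class name and numbers hypothetical; production services usually back this with Redis):

```python
import time

class TokenBucket:
    """Allow `rate` requests/second with bursts up to `capacity`."""

    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill proportionally to elapsed time, never beyond capacity
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=5, capacity=2)
results = [bucket.allow() for _ in range(4)]  # four back-to-back requests
```

The burst of four immediate calls drains the two-token capacity, so the last two are rejected until the bucket refills.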

Required Skills & Qualifications (Must-Have)

  • 2–5 years of experience as a Backend Engineer.
  • Strong proficiency in Python and Django / Django REST Framework.
  • Solid computer science fundamentals, including data structures, algorithms, and basic distributed systems concepts.
  • Proven experience designing and maintaining REST APIs in production environments.
  • Hands-on DevOps experience, including:
  • Docker and containerized services
  • CI/CD pipelines (GitHub Actions, GitLab CI, or similar)
  • Deployments on cloud platforms such as AWS, GCP, Azure, or DigitalOcean
  • Strong working knowledge of PostgreSQL, including schema design, migrations, indexing, and query optimization.
  • Strong debugging skills and a habit of instrumenting systems using logs, metrics, and alerts.
  • Ownership mindset with the ability to take systems from spec → implementation → production → iteration.

Good-to-Have Skills

  • Experience working in early-stage startups or building 0→1 products.
  • Familiarity with Kubernetes or other container orchestration tools.
  • Experience with Infrastructure as Code (Terraform, Pulumi).
  • Exposure to monitoring and observability stacks such as Prometheus, Grafana, ELK, or similar tools.
  • Prior exposure to solar, CAD/geometry, geospatial data, or financial/pricing workflows.

What We Offer

  • Real-world impact: every feature you ship helps accelerate solar adoption on real rooftops.
  • Opportunity to work across backend engineering, DevOps, integrations, and performance optimization.
  • A mission-driven, fast-growing product focused on sustainability and clean energy.
Read more
MyOperator - VoiceTree Technologies

at MyOperator - VoiceTree Technologies

1 video
3 recruiters
Vijay Muthu
Posted by Vijay Muthu
Remote only
3 - 5 yrs
₹8L - ₹12L / yr
Kubernetes
Amazon Web Services (AWS)
Amazon EC2
AWS RDS
AWS OpenSearch
+22 more

About MyOperator

MyOperator is a Business AI Operator, a category leader that unifies WhatsApp, Calls, and AI-powered chat & voice bots into one intelligent business communication platform. Unlike fragmented communication tools, MyOperator combines automation, intelligence, and workflow integration to help businesses run WhatsApp campaigns, manage calls, deploy AI chatbots, and track performance — all from a single, no-code platform. Trusted by 12,000+ brands including Amazon, Domino's, Apollo, and Razorpay, MyOperator enables faster responses, higher resolution rates, and scalable customer engagement — without fragmented tools or increased headcount.


Job Summary

We are looking for a skilled and motivated DevOps Engineer with 3+ years of hands-on experience in AWS cloud infrastructure, CI/CD automation, and Kubernetes-based deployments. The ideal candidate will have strong expertise in Infrastructure as Code, containerization, monitoring, and automation, and will play a key role in ensuring high availability, scalability, and security of production systems.


Key Responsibilities

  • Design, deploy, manage, and maintain AWS cloud infrastructure, including EC2, RDS, OpenSearch, VPC, S3, ALB, API Gateway, Lambda, SNS, and SQS.
  • Build, manage, and operate Kubernetes (EKS) clusters and containerized workloads.
  • Containerize applications using Docker and manage deployments with Helm charts
  • Develop and maintain CI/CD pipelines using Jenkins for automated build and deployment processes
  • Provision and manage infrastructure using Terraform (Infrastructure as Code)
  • Implement and manage monitoring, logging, and alerting solutions using Prometheus and Grafana
  • Write and maintain Python scripts for automation, monitoring, and operational tasks
  • Ensure high availability, scalability, performance, and cost optimization of cloud resources
  • Implement and follow security best practices across AWS and Kubernetes environments
  • Troubleshoot production issues, perform root cause analysis, and support incident resolution
  • Collaborate closely with development and QA teams to streamline deployment and release processes
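The Python-automation bullet above usually means small ops scripts, for example scanning service logs and alerting on error spikes. A minimal sketch (log format and threshold hypothetical):

```python
import re
from collections import Counter

log_lines = [
    "2024-05-01T10:00:01 INFO  api request ok",
    "2024-05-01T10:00:02 ERROR db connection refused",
    "2024-05-01T10:00:03 ERROR db connection refused",
    "2024-05-01T10:00:04 WARN  slow query 1200ms",
]

def count_levels(lines):
    """Tally log levels so an alert can threshold on ERROR counts."""
    pattern = re.compile(r"^\S+\s+(\w+)")
    return Counter(m.group(1) for line in lines if (m := pattern.match(line)))

levels = count_levels(log_lines)
should_alert = levels["ERROR"] >= 2  # hypothetical alerting threshold
```

In practice the same tally would come from CloudWatch or an OpenSearch query, with the alert wired to SNS.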

Required Skills & Qualifications

  • 3+ years of hands-on experience as a DevOps Engineer or Cloud Engineer.
  • Strong experience with AWS services, including:
  • EC2, RDS, OpenSearch, VPC, S3
  • Application Load Balancer (ALB), API Gateway, Lambda
  • SNS and SQS.
  • Hands-on experience with AWS EKS (Kubernetes)
  • Strong knowledge of Docker and Helm charts
  • Experience with Terraform for infrastructure provisioning and management
  • Solid experience building and managing CI/CD pipelines using Jenkins
  • Practical experience with Prometheus and Grafana for monitoring and alerting
  • Proficiency in Python scripting for automation and operational tasks
  • Good understanding of Linux systems, networking concepts, and cloud security
  • Strong problem-solving and troubleshooting skills

Good to Have (Preferred Skills)

  • Exposure to GitOps practices
  • Experience managing multi-environment setups (Dev, QA, UAT, Production)
  • Knowledge of cloud cost optimization techniques
  • Understanding of Kubernetes security best practices
  • Experience with log aggregation tools (e.g., ELK/OpenSearch stack)

Language Preference

  • Fluency in English is mandatory.
  • Fluency in Hindi is preferred.
Read more
OpsTree Solutions

at OpsTree Solutions

4 candid answers
1 recruiter
Nikita Sinha
Posted by Nikita Sinha
Mumbai
3 - 4 yrs
Upto ₹13L / yr (Varies)
Python
Google Cloud Platform (GCP)
Kubernetes
CI/CD

Key Responsibilities

  • Automation & Reliability: Automate infrastructure and operational processes to ensure high reliability, scalability, and security.
  • Cloud Infrastructure Design: Gather GCP infrastructure requirements, evaluate solution options, and implement best-fit cloud architectures.
  • Infrastructure as Code (IaC): Design, develop, and maintain infrastructure using Terraform and Ansible.
  • CI/CD Ownership: Build, manage, and maintain robust CI/CD pipelines using Jenkins, ensuring system reliability and performance.
  • Container Orchestration: Manage Docker containers and self-managed Kubernetes clusters across multiple cloud environments.
  • Monitoring & Observability: Implement and manage cloud-native monitoring solutions using Prometheus, Grafana, and the ELK stack.
  • Proactive Issue Resolution: Troubleshoot and resolve infrastructure and application issues across development, testing, and production environments.
  • Scripting & Automation: Develop efficient automation scripts using Python and one or more of Node.js, Go, or Shell scripting.
  • Security Best Practices: Maintain and enhance the security of cloud services, Kubernetes clusters, and deployment pipelines.
  • Cross-functional Collaboration: Work closely with engineering, product, and security teams to design and deploy secure, scalable infrastructure.


Read more
LogIQ Labs Pvt.Ltd.

at LogIQ Labs Pvt.Ltd.

2 recruiters
HR eShipz
Posted by HR eShipz
Bengaluru (Bangalore)
3 - 4 yrs
₹6L - ₹12L / yr
Python
Postman
API
SQL

Company Description


eShipz is a rapidly expanding logistics automation platform designed to optimize shipping operations and enhance post-purchase customer experiences. The platform offers solutions such as multi-carrier integrations, real-time tracking, NDR management, returns, freight audits, and more. Trusted by over 350 businesses, eShipz provides easy-to-use analytics, automated shipping processes, and reliable customer support. As a trusted partner for eCommerce businesses and enterprises, eShipz delivers smarter, more efficient shipping solutions. Visit www.eshipz.com for more information.


Role Description



The Python Support Engineer role at eShipz requires supporting clients by providing technical solutions and resolving issues related to the platform. Responsibilities include troubleshooting reported problems, delivering technical support in a professional manner, and assisting with software functionality and operating systems. The engineer will also collaborate with internal teams to ensure a seamless customer experience. This is a full-time on-site role located in Sanjay Nagar, Greater Bengaluru Area.


Qualifications

  • Strong proficiency in Troubleshooting and Technical Support skills to identify and address software or technical challenges effectively.
  • Capability to provide professional Customer Support and Customer Service, ensuring high customer satisfaction and resolving inquiries promptly.
  • Proficiency and knowledge of Operating Systems to diagnose and resolve platform-specific issues efficiently.
  • Excellent problem-solving, communication, and interpersonal skills.
  • Bachelor's degree in computer science, IT, or a related field.
  • Experience working with Python and an understanding of backend systems is a plus.


  • Technical Skills:
  • Python Proficiency: Strong understanding of core Python (Data structures, decorators, generators, and exception handling).
  • Frameworks: Familiarity with web frameworks like Django, Flask, or FastAPI.
  • Databases: Proficiency in SQL (PostgreSQL/MySQL) and understanding of ORMs like SQLAlchemy or Django ORM.
  • Infrastructure: Basic knowledge of Linux/Unix commands, Docker, and CI/CD pipelines (Jenkins/GitHub Actions).
  • Version Control: Comfortable using Git for branching, merging, and pull requests.
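For the core-Python items above, decorators and generators show up constantly in support tooling. A small self-contained illustration (the retry helper and chunker are hypothetical examples, not eShipz code):

```python
import functools

def retry(times):
    """Decorator: re-invoke the wrapped function up to `times` attempts on ValueError."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            for attempt in range(1, times + 1):
                try:
                    return fn(*args, **kwargs)
                except ValueError:
                    if attempt == times:
                        raise
        return inner
    return wrap

calls = []

@retry(times=3)
def flaky_carrier_call():
    calls.append(1)
    if len(calls) < 3:
        raise ValueError("transient carrier error")
    return "ok"

result = flaky_carrier_call()  # succeeds on the third attempt

def batched(items, size):
    """Generator: yield `items` in chunks of `size` (e.g. for bulk API calls)."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

chunks = list(batched([1, 2, 3, 4, 5], 2))
```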


  • Soft Skills:
  • Analytical Thinking: A logical approach to solving complex, "needle-in-a-haystack" problems.
  • Communication: Ability to explain technical concepts to both developers and end-users.
  • Patience & Empathy: Managing high-pressure situations when critical systems are down.


  • Work Location: Sanjay Nagar, Bangalore (WFO)


  • Work Timings:

  • Mon - Fri (WFO) (9:45 A.M. - 6:15 P.M.)
  • 1st & 3rd Sat (WFO) (9:00 A.M. - 2:00 P.M.)
  • 2nd & 4th Sat (WFH) (9:00 A.M. - 2:00 P.M.)



Read more
Hashone Careers
Pune
4 - 8 yrs
₹15L - ₹24L / yr
NodeJS (Node.js)
Python
React.js

Job Description

We are seeking a highly skilled Sr. Fullstack Web Developer with a passion for crafting exceptional web experiences and robust backend systems. This role demands a deep understanding of React, Node.js, and modern cloud ecosystems like AWS, combined with a commitment to best practices and continuous improvement.


As part of our team, you will work closely with cross-functional teams to design, build, and maintain high-performance web applications and scalable backend solutions for our clients. The ideal candidate is a team player with a growth mindset, a passion for excellence, and the ability to energize and inspire those around them.


Key Responsibilities

Frontend Development

Design, develop, and maintain scalable and responsive web applications using React.

Implement user-friendly and accessible UI/UX designs, ensuring cross-browser compatibility and performance optimization.

Leverage modern frontend tools and frameworks to deliver high-quality and maintainable code.

Backend Engineering

Build and maintain scalable and secure backend systems using Python or Node.js and cloud platforms like AWS.

Design and implement RESTful & GraphQL APIs and server-side logic to power web applications.

Work with SQL and NoSQL databases to create efficient and scalable data solutions.


CI/CD and Automation

Set up and manage CI/CD pipelines to streamline development, testing, and deployment processes.

Automate repetitive tasks and workflows to improve team productivity and code reliability.

Programming Best Practices

Write clean, maintainable, and well-documented code adhering to best programming practices.

Conduct code reviews, implement test-driven development (TDD), and ensure high-quality software delivery.

Collaboration and Communication

Collaborate closely with designers, product managers, and other developers to deliver features that align with project goals and client expectations.

Communicate effectively with team members and stakeholders to ensure alignment and clarity throughout the development process.

Continuous Learning and Innovation

Stay up-to-date with the latest trends and technologies in web development, and actively contribute to process and technology improvements.

Explore and integrate modern tools, including AI-driven copilots, to enhance development workflows and efficiency.

Soft Skills and Leadership

Exhibit a team-first attitude, contributing to a positive and collaborative team culture.

Provide mentorship to junior developers, sharing knowledge and fostering a culture of continuous learning and growth.



Requirements


Education: Bachelor’s degree in Computer Science, Engineering, or a related field.

Experience: 4+ years of professional experience as a Fullstack Developer, with at least 3 years specialising in React for web development.

Proven expertise in building and maintaining scalable web applications in a services business environment.

Strong proficiency in Python or Node.js and familiarity with modern cloud platforms such as AWS.

Hands-on experience working with both SQL and NoSQL databases for designing and optimising data structures.

Expertise in setting up and managing CI/CD pipelines, as well as automating workflows.

Deep understanding of frontend and backend development best practices, including clean code principles, test-driven development (TDD), and code reviews.

Familiarity with tools like JIRA, Git, and DevOps practices to support efficient development workflows.

Strong problem-solving skills and the ability to troubleshoot complex technical challenges.

Excellent communication and collaboration skills, with experience working in cross-functional teams.

Demonstrated commitment to continuous learning and staying updated on emerging technologies and frameworks.

A team player with a growth mindset, passion for software craftsmanship, and attention to detail.

Prior experience working with a services company is a must.



Benefits

Mentorship: Work next to some of the best engineers and designers.

Freedom: An environment where you get to practice your craft. No micromanagement.

Comprehensive healthcare: Healthcare for you and your family.

Growth: A tailor-made program to help you achieve your career goals.

A voice that is heard: We don't claim to know the best way of doing things. We like to listen to ideas from our team.

Read more
Appiness Interactive Pvt. Ltd.
Bengaluru (Bangalore)
4 - 10 yrs
₹6L - ₹12L / yr
Python
Django
React.js
NextJs (Next.js)
PostgreSQL
+2 more

Location: Bengaluru, India

Type: Full-time

Experience: 4-7 Years

Mode: Hybrid


The Role

We're looking for a Full Stack Engineer who thrives on building high-performance applications at scale. You'll work across our entire stack—from optimizing PostgreSQL queries on 140M+ records to crafting intuitive React interfaces. This is a high-impact role where your code directly influences how sales teams discover and engage with prospects worldwide.

What You'll Do

  • Build and optimize REST APIs using Django REST Framework handling millions of records
  • Design and implement complex database queries, indexes, and caching strategies for PostgreSQL
  • Develop responsive, high-performance front-end interfaces with Next.js and React
  • Implement Redis caching layers and optimize query performance for sub-second response times
  • Design and implement smart search/filter systems with complex logic
  • Collaborate on data pipeline architecture for processing large datasets
  • Write clean, testable code with comprehensive unit and integration tests
  • Participate in code reviews, architecture discussions, and technical planning
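The Redis caching bullet above is usually the cache-aside pattern. A minimal sketch with a plain dict standing in for Redis (all names hypothetical):

```python
cache = {}     # stands in for Redis in this sketch
db_reads = []  # records each time the "database" is actually hit

def fetch_record(record_id):
    """Simulated expensive database lookup (e.g. a heavy PostgreSQL query)."""
    db_reads.append(record_id)
    return {"id": record_id, "name": f"record-{record_id}"}

def get_record(record_id):
    """Cache-aside: check the cache, fall back to the DB on a miss, then populate."""
    key = f"record:{record_id}"
    if key in cache:
        return cache[key]
    value = fetch_record(record_id)
    cache[key] = value
    return value

first = get_record(42)   # miss: hits the simulated DB once
second = get_record(42)  # hit: served from the cache
```

With real Redis, the cache write would also carry a TTL so entries expire instead of growing without bound.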

Required Skills

  • 4-7 years of professional experience in full stack development
  • Strong proficiency in Python and Django/Django REST Framework
  • Expert-level PostgreSQL knowledge: query optimization, indexing, EXPLAIN ANALYZE, partitioning
  • Solid experience with Next.js, React, and modern JavaScript/TypeScript
  • Experience with state management (Zustand, Redux, or similar)
  • Working knowledge of Redis for caching and session management
  • Familiarity with AWS services (RDS, EC2, S3, CloudFront)
  • Understanding of RESTful API design principles and best practices
  • Experience with Git, CI/CD pipelines, and agile development workflows

Nice to Have

  • Experience with Elasticsearch for full-text search at scale
  • Knowledge of data scraping, ETL pipelines, or data enrichment
  • Experience with Celery for async task processing
  • Familiarity with Tailwind CSS and modern UI/UX practices
  • Previous work on B2B SaaS or data-intensive applications
  • Understanding of security best practices and anti-scraping measures


Our Tech Stack

Backend

Python, Django REST Framework

Frontend

Next.js, React, Zustand, Tailwind CSS

Database

PostgreSQL 17, Redis

Infrastructure

AWS (RDS, EC2, S3, CloudFront), Docker

Tools

GitHub, pgBouncer


Why Join Us

  • Work on a product processing 140M+ records—real scale, real challenges
  • Direct impact on product direction and technical decisions
  • Modern tech stack with room to experiment and innovate
  • Collaborative team environment with a focus on growth
  • Competitive compensation and flexible hybrid work model


Read more
Appsforbharat
Pooja V
Posted by Pooja V
Bengaluru (Bangalore)
6 - 13 yrs
₹30L - ₹40L / yr
Go Programming (Golang)
Python
Amazon Web Services (AWS)
SQL

About the role


We are seeking a seasoned Backend Tech Lead with deep expertise in Golang and Python to lead our backend team. The ideal candidate has 6+ years of experience in backend technologies and 2–3 years of proven engineering mentoring experience, having successfully scaled systems and shipped B2C applications in collaboration with product teams.

Responsibilities

Technical & Product Delivery

● Oversee design and development of backend systems operating at 10K+ RPM scale.

● Guide the team in building transactional systems (payments, orders, etc.) and behavioral systems (analytics, personalization, engagement tracking).

● Partner with product managers to scope, prioritize, and release B2C product features and applications.

● Ensure architectural best practices, high-quality code standards, and robust testing practices.

● Own delivery of projects end-to-end with a focus on scalability, reliability, and business impact.

Operational Excellence

● Champion observability, monitoring, and reliability across backend services.

● Continuously improve system performance, scalability, and resilience.

● Streamline development workflows and engineering processes for speed and quality.

Requirements

Experience:

● 7+ years of professional experience in backend technologies.

● 2–3 years as a tech lead driving delivery.

Technical Skills:

● Strong hands-on expertise in Golang and Python.

● Proven track record with high-scale systems (≥10K RPM).

● Solid understanding of distributed systems, APIs, SQL/NoSQL databases, and cloud platforms.

Leadership Skills:

● Demonstrated success in managing teams through 2–3 appraisal cycles.

● Strong experience working with product managers to deliver consumer-facing applications.

● Excellent communication and stakeholder management abilities.

Nice-to-Have

● Familiarity with containerization and orchestration (Docker, Kubernetes).

● Experience with observability tools (Prometheus, Grafana, OpenTelemetry).

● Previous leadership experience in B2C product companies operating at scale.

What We Offer

● Opportunity to lead and shape a backend engineering team building at scale.

● A culture of ownership, innovation, and continuous learning.

● Competitive compensation, benefits, and career growth opportunities.

Read more
Aurum Analytica
Noida
3 - 5 yrs
₹5L - ₹8L / yr
Python
SQL
SQL Server
pandas
RESTful APIs
+3 more

Job Summary

We are looking for a Marketing Data Engineering Specialist who can manage our real-estate lead delivery pipelines, integrate APIs, automate data workflows, and support performance marketing with accurate insights. The ideal candidate understands marketing funnels and has strong skills in API integrations, data analysis, automation, and server deployments.

Key Responsibilities

  • Manage inbound/outbound lead flows through APIs, webhooks, and sheet-based integrations.
  • Clean, validate, and automate datasets using Python, Excel, and ETL workflows.
  • Analyse lead feedback (RNR, NT, QL, SV, Booking) and generate actionable insights.
  • Build and maintain automated reporting dashboards.
  • Deploy Python scripts/notebooks on Linux servers and monitor cron jobs/logs.
  • Work closely with marketing, client servicing, and data teams to improve lead quality and campaign performance.
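Lead cleaning of the kind described above often starts as a small normalize-and-dedupe pass. A pure-Python sketch (field names and formats hypothetical; a production pipeline here would typically use pandas):

```python
import re

raw_leads = [
    {"phone": "+91 98765 43210", "status": "QL"},
    {"phone": "9876543210", "status": "RNR"},  # same number, different formatting
    {"phone": "12345", "status": "SV"},        # invalid: too few digits
]

def normalize_phone(phone):
    """Keep the last 10 digits (Indian mobile convention); None if invalid."""
    digits = re.sub(r"\D", "", phone)
    return digits[-10:] if len(digits) >= 10 else None

seen, clean = set(), []
for lead in raw_leads:
    number = normalize_phone(lead["phone"])
    if number and number not in seen:  # drop invalid numbers and duplicates
        seen.add(number)
        clean.append({**lead, "phone": number})
```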

Required Skills

  • Python (Pandas, API requests), Advanced Excel, SQL
  • REST APIs, JSON, authentication handling
  • Linux server deployment (cron, logs)
  • Data visualization tools (Excel, Google Looker Studio preferred)
  • Strong understanding of performance marketing metrics and funnels

Qualifications

  • Bachelor’s degree in Engineering/CS/Maths/Statistics/Marketing Analytics or a related field.
  • Minimum 3 years of experience in marketing analytics, data engineering, or marketing operations.

Preferred Traits

  • Detail-oriented, analytical, strong problem-solver
  • Ability to work in fast-paced environments
  • Good communication and documentation skills

Read more
Get to hear about interesting companies hiring right now
Why apply via Cutshort?
Connect with actual hiring teams and get their fast response. No spam.
Find more jobs