50+ Python Jobs in Delhi, NCR and Gurgaon
Review Criteria
- Strong Senior/Lead DevOps Engineer Profile
- 8+ years of hands-on experience in DevOps engineering, with a strong focus on AWS cloud infrastructure and services (EC2, VPC, EKS, RDS, Lambda, CloudFront, etc.).
- Must have strong system administration expertise (installation, tuning, troubleshooting, security hardening)
- Solid experience in CI/CD pipeline setup and automation using tools such as Jenkins, GitHub Actions, or similar
- Hands-on experience with Infrastructure as Code (IaC) tools such as Terraform, CloudFormation, or Ansible
- Must have strong database expertise across MongoDB and Snowflake (administration, performance optimization, integrations)
- Experience with monitoring and observability tools such as Prometheus, Grafana, ELK, CloudWatch, or Datadog
- Good exposure to containerization and orchestration using Docker and Kubernetes (EKS)
- Must be currently working in an AWS-based environment (AWS experience must be in the current organization)
- This is an individual contributor (IC) role
Preferred
- Proficiency in scripting languages (Bash, Python) for automation and operational tasks.
- Strong understanding of security best practices, IAM, WAF, and GuardDuty configurations.
- Exposure to DevSecOps and end-to-end automation of deployments, provisioning, and monitoring.
- Bachelor’s or Master’s degree in Computer Science, Information Technology, or related field.
Role & Responsibilities
We are seeking a highly skilled Senior DevOps Engineer with 8+ years of hands-on experience in designing, automating, and optimizing cloud-native solutions on AWS. AWS and Linux expertise are mandatory. The ideal candidate will have strong experience across databases, automation, CI/CD, containers, and observability, with the ability to build and scale secure, reliable cloud environments.
Key Responsibilities:
Cloud & Infrastructure as Code (IaC):
- Architect and manage AWS environments ensuring scalability, security, and high availability.
- Implement infrastructure automation using Terraform, CloudFormation, and Ansible.
- Configure VPC Peering, Transit Gateway, and PrivateLink/Connect for advanced networking.
CI/CD & Automation:
- Build and maintain CI/CD pipelines (Jenkins, GitHub, SonarQube, automated testing).
- Automate deployments, provisioning, and monitoring across environments.
Containers & Orchestration:
- Deploy and operate workloads on Docker and Kubernetes (EKS).
- Implement IAM Roles for Service Accounts (IRSA) for secure pod-level access.
- Optimize performance of containerized and microservices applications.
Monitoring & Reliability:
- Implement observability with Prometheus, Grafana, ELK, CloudWatch, M/Monit, and Datadog.
- Establish logging, alerting, and proactive monitoring for high availability.
Security & Compliance:
- Apply AWS security best practices including IAM, IRSA, SSO, and role-based access control.
- Manage WAF, GuardDuty, Inspector, and other AWS-native security tools.
- Configure VPNs, firewalls, secure access policies, and AWS Organizations.
Databases & Analytics:
- Administer MongoDB, Snowflake, Aerospike, RDS, PostgreSQL, MySQL/MariaDB, and other RDBMS.
- Manage data reliability, performance tuning, and cloud-native integrations.
- Build and orchestrate data workflows with Apache Airflow and Spark (a minimal Airflow sketch follows this list).
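For candidates new to the orchestration side, here is a minimal sketch of the kind of Airflow workflow this role covers; it assumes Airflow 2.x, and the DAG ID, task names, and callables are hypothetical placeholders.

```python
# Minimal Airflow 2.x DAG sketch: a daily extract -> transform pipeline.
# All names here are illustrative, not from a real project.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pull rows from the source system")


def transform():
    print("clean and reshape the extracted rows")


with DAG(
    dag_id="example_daily_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)

    extract_task >> transform_task  # transform runs only after extract succeeds
```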
Ideal Candidate
- 8+ years in DevOps engineering, with strong AWS Cloud expertise (EC2, VPC, TG, RDS, S3, IAM, EKS, EMR, SCP, MWAA, Lambda, CloudFront, SNS, SES etc.).
- Linux expertise is mandatory (system administration, tuning, troubleshooting, CIS hardening, etc.).
- Strong knowledge of databases: MongoDB, Snowflake, Aerospike, RDS, PostgreSQL, MySQL/MariaDB, and other RDBMS.
- Hands-on with Docker, Kubernetes (EKS), Terraform, CloudFormation, Ansible.
- Proven ability with CI/CD pipeline automation and DevSecOps practices.
- Practical experience with VPC Peering, Transit Gateway, WAF, GuardDuty, Inspector, and advanced AWS networking and security tools.
- Expertise in observability tools: Prometheus, Grafana, ELK, CloudWatch, M/Monit, and Datadog.
- Strong scripting skills (Shell/bash, Python, or similar) for automation.
- Bachelor's or Master's degree
- Effective communication skills
Job Summary:
We are seeking a skilled and forward-thinking Cloud AI Professional to join our technology team. The ideal candidate will have expertise in designing, deploying, and managing artificial intelligence and machine learning solutions in cloud environments (AWS, Azure, or Google Cloud). You will work at the intersection of cloud computing and AI, helping to build scalable, secure, and high-performance AI-driven applications and services.
Key Responsibilities:
- Design, develop, and deploy AI/ML models in cloud environments (AWS, GCP, Azure).
- Build and manage end-to-end ML pipelines using cloud-native tools (e.g., SageMaker, Vertex AI, Azure ML).
- Collaborate with data scientists, engineers, and stakeholders to define AI use cases and deliver solutions.
- Automate model training, testing, and deployment using MLOps practices (a deployment-side sketch follows this list).
- Optimize performance and cost of AI/ML workloads in the cloud.
- Ensure security, compliance, and scalability of deployed AI services.
- Monitor model performance in production and retrain models as needed.
- Stay current with new developments in AI/ML and cloud technologies.
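To make the deployment piece concrete, the sketch below invokes an already-deployed SageMaker real-time endpoint from Python with boto3; the endpoint name and payload shape are hypothetical and depend on the model being served.

```python
# Hedged sketch: call an existing SageMaker endpoint via boto3.
# "my-model-endpoint" and the payload schema are placeholders.
import json

import boto3

runtime = boto3.client("sagemaker-runtime")

payload = {"instances": [[5.1, 3.5, 1.4, 0.2]]}  # shape depends on the model

response = runtime.invoke_endpoint(
    EndpointName="my-model-endpoint",
    ContentType="application/json",
    Body=json.dumps(payload),
)

print(json.loads(response["Body"].read()))
```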
Required Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
- 3+ years of experience in AI/ML and cloud computing.
- Hands-on experience with cloud platforms (AWS, GCP, or Azure).
- Proficient in Python, TensorFlow, PyTorch, or similar frameworks.
- Strong understanding of MLOps tools and CI/CD for machine learning.
- Experience with containerization (Docker, Kubernetes).
- Familiarity with cloud-native data services (e.g., BigQuery, S3, Cosmos DB).
Preferred Qualifications:
- Certifications in Cloud (e.g., AWS Certified Machine Learning, Google Cloud Professional ML Engineer).
- Experience with generative AI, LLMs, or real-time inferencing.
- Knowledge of data governance and ethical AI practices.
- Experience with REST APIs and microservices architecture.
Soft Skills:
- Strong problem-solving and analytical skills.
- Excellent communication and collaboration abilities.
- Ability to work in a fast-paced, agile environment.
DataHavn IT Solutions specializes in big data and cloud computing, artificial intelligence and machine learning, application development, and consulting services. We aim to be a frontrunner in everything to do with data, and we have the expertise to transform our customers' businesses by making the right use of data.
About the Role
We're seeking a talented and versatile Full Stack Developer with a strong foundation in mobile app development to join our dynamic team. You'll play a pivotal role in designing, developing, and maintaining high-quality software applications across various platforms.
Responsibilities
- Full Stack Development: Design, develop, and implement both front-end and back-end components of web applications using modern technologies and frameworks.
- Mobile App Development: Develop native mobile applications for iOS and Android platforms using Swift and Kotlin, respectively.
- Cross-Platform Development: Explore and utilize cross-platform frameworks (e.g., React Native, Flutter) for efficient mobile app development.
- API Development: Create and maintain RESTful APIs for integration with front-end and mobile applications (a minimal endpoint sketch follows this list).
- Database Management: Work with databases (e.g., MySQL, PostgreSQL) to store and retrieve application data.
- Code Quality: Adhere to coding standards, best practices, and ensure code quality through regular code reviews.
- Collaboration: Collaborate effectively with designers, project managers, and other team members to deliver high-quality solutions.
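As a flavour of the API work above, here is a minimal REST endpoint sketch in Flask; the resource and fields are hypothetical, and a production service would add validation, auth, and a real database.

```python
# Minimal Flask REST sketch with an in-memory store (illustrative only).
from flask import Flask, jsonify, request

app = Flask(__name__)
tasks = []  # stands in for a real database table


@app.route("/tasks", methods=["GET"])
def list_tasks():
    return jsonify(tasks)


@app.route("/tasks", methods=["POST"])
def create_task():
    task = {"id": len(tasks) + 1, "title": request.json["title"]}
    tasks.append(task)
    return jsonify(task), 201


if __name__ == "__main__":
    app.run(debug=True)
```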
Qualifications
- Bachelor's degree in Computer Science, Software Engineering, or a related field.
- Strong programming skills in languages such as JavaScript, Python, or Java.
- Experience with relevant frameworks and technologies such as React, Angular, Node.js, Swift, or Kotlin.
- Understanding of software development methodologies (e.g., Agile, Waterfall).
- Excellent problem-solving and analytical skills.
- Ability to work independently and as part of a team.
- Strong communication and interpersonal skills.
Preferred Skills (Optional)
- Experience with cloud platforms (e.g., AWS, Azure, GCP).
- Knowledge of DevOps practices and tools.
- Experience with serverless architectures.
- Contributions to open-source projects.
What We Offer
- Competitive salary and benefits package.
- Opportunities for professional growth and development.
- A collaborative and supportive work environment.
- A chance to work on cutting-edge projects.
DataHavn IT Solutions specializes in big data and cloud computing, artificial intelligence and machine learning, application development, and consulting services. We aim to be a frontrunner in everything to do with data, and we have the expertise to transform our customers' businesses by making the right use of data.
About the Role:
As a Data Scientist specializing in Google Cloud, you will play a pivotal role in driving data-driven decision-making and innovation within our organization. You will leverage the power of Google Cloud's robust data analytics and machine learning tools to extract valuable insights from large datasets, develop predictive models, and optimize business processes.
Key Responsibilities:
- Data Ingestion and Preparation:
- Design and implement efficient data pipelines for ingesting, cleaning, and transforming data from various sources (e.g., databases, APIs, cloud storage) into Google Cloud Platform (GCP) data warehouses (BigQuery) or data lakes, using processing services such as Dataflow.
- Perform data quality assessments, handle missing values, and address inconsistencies to ensure data integrity.
- Exploratory Data Analysis (EDA):
- Conduct in-depth EDA to uncover patterns, trends, and anomalies within the data.
- Utilize visualization techniques (e.g., Tableau, Looker) to communicate findings effectively.
- Feature Engineering:
- Create relevant features from raw data to enhance model performance and interpretability.
- Explore techniques like feature selection, normalization, and dimensionality reduction.
- Model Development and Training:
- Develop and train predictive models using machine learning algorithms (e.g., linear regression, logistic regression, decision trees, random forests, neural networks) on GCP platforms like Vertex AI.
- Evaluate model performance using appropriate metrics and iterate on the modeling process (a compact training/evaluation sketch follows this list).
- Model Deployment and Monitoring:
- Deploy trained models into production environments using GCP's ML tools and infrastructure.
- Monitor model performance over time, identify drift, and retrain models as needed.
- Collaboration and Communication:
- Work closely with data engineers, analysts, and business stakeholders to understand their requirements and translate them into data-driven solutions.
- Communicate findings and insights in a clear and concise manner, using visualizations and storytelling techniques.
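The train/evaluate loop mentioned above looks roughly like the sketch below, shown with scikit-learn on a bundled dataset; in this role the data would instead come from BigQuery and the training would typically run on Vertex AI.

```python
# Compact model-development sketch: train, then evaluate on held-out data.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

model = LogisticRegression(max_iter=500).fit(X_train, y_train)

print(f"accuracy: {accuracy_score(y_test, model.predict(X_test)):.3f}")
```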
Required Skills and Qualifications:
- Strong proficiency in Python or R programming languages.
- Experience with Google Cloud Platform (GCP) services such as BigQuery, Dataflow, Cloud Dataproc, and Vertex AI.
- Familiarity with machine learning algorithms and techniques.
- Knowledge of data visualization tools (e.g., Tableau, Looker).
- Excellent problem-solving and analytical skills.
- Ability to work independently and as part of a team.
- Strong communication and interpersonal skills.
Preferred Qualifications:
- Experience with cloud-native data technologies (e.g., Apache Spark, Kubernetes).
- Knowledge of distributed systems and scalable data architectures.
- Experience with natural language processing (NLP) or computer vision applications.
- Certifications in Google Cloud Platform or relevant machine learning frameworks.
About the Role:
We are seeking a talented Data Engineer to join our team and play a pivotal role in transforming raw data into valuable insights. As a Data Engineer, you will design, develop, and maintain robust data pipelines and infrastructure to support our organization's analytics and decision-making processes.
Responsibilities:
- Data Pipeline Development: Build and maintain scalable data pipelines to extract, transform, and load (ETL) data from various sources (e.g., databases, APIs, files) into data warehouses or data lakes (a compact ETL sketch follows this list).
- Data Infrastructure: Design, implement, and manage data infrastructure components, including data warehouses, data lakes, and data marts.
- Data Quality: Ensure data quality by implementing data validation, cleansing, and standardization processes.
- Team Management: Ability to lead and manage a team of engineers.
- Performance Optimization: Optimize data pipelines and infrastructure for performance and efficiency.
- Collaboration: Collaborate with data analysts, scientists, and business stakeholders to understand their data needs and translate them into technical requirements.
- Tool and Technology Selection: Evaluate and select appropriate data engineering tools and technologies (e.g., SQL, Python, Spark, Hadoop, cloud platforms).
- Documentation: Create and maintain clear and comprehensive documentation for data pipelines, infrastructure, and processes.
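For a concrete (if simplified) picture of the ETL work above, here is a small pandas sketch; the file, columns, and SQLite target are hypothetical stand-ins for real sources and a real warehouse.

```python
# Toy ETL sketch: extract a CSV, clean it, load it into a SQL table.
import sqlite3

import pandas as pd

# Extract (path and columns are placeholders).
raw = pd.read_csv("orders_raw.csv")

# Transform: basic validation, cleansing, and standardization.
raw["order_date"] = pd.to_datetime(raw["order_date"], errors="coerce")
clean = raw.dropna(subset=["order_id", "order_date"]).drop_duplicates("order_id")

# Load: SQLite stands in for the real warehouse target.
with sqlite3.connect("warehouse.db") as conn:
    clean.to_sql("orders", conn, if_exists="replace", index=False)
```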
Skills:
- Strong proficiency in SQL and at least one programming language (e.g., Python, Java).
- Experience with data warehousing and data lake technologies (e.g., Snowflake, AWS Redshift, Databricks).
- Knowledge of cloud platforms (e.g., AWS, GCP, Azure) and cloud-based data services.
- Understanding of data modeling and data architecture concepts.
- Experience with ETL/ELT tools and frameworks.
- Excellent problem-solving and analytical skills.
- Ability to work independently and as part of a team.
Preferred Qualifications:
- Experience with real-time data processing and streaming technologies (e.g., Kafka, Flink).
- Knowledge of machine learning and artificial intelligence concepts.
- Experience with data visualization tools (e.g., Tableau, Power BI).
- Certification in cloud platforms or data engineering.
Job Role: Optimization Engineer - C Programming
Experience: 3 to 8 Years
Location: Bangalore, Pune, Delhi
We're hiring an Optimization Engineer skilled in C programming and Operations Research / Optimization to design and optimize algorithms that solve complex business and engineering problems.
Key Responsibilities:
- Develop and maintain high-performance software using C.
- Build and implement optimization models (linear, integer, nonlinear).
- Collaborate with teams to deliver scalable, efficient solutions.
- Analyze and improve existing algorithms for performance and scalability.
Must-Have Skills:
- Expertise in C Programming and Operations Research / Optimization.
- Strong in data structures, algorithms, and memory management.
- Hands-on with tools like CPLEX, Gurobi, or COIN-OR.
- Python experience is an added advantage (a small Python optimization sketch follows this list).
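As a taste of the modelling work, here is a tiny linear program in PuLP, which ships with the COIN-OR CBC solver mentioned above; CPLEX and Gurobi expose similar modelling APIs. The products and coefficients are invented for illustration.

```python
# Tiny product-mix LP sketch solved with PuLP's bundled CBC (COIN-OR) solver.
from pulp import LpMaximize, LpProblem, LpVariable, value

model = LpProblem("product_mix", LpMaximize)
x = LpVariable("units_a", lowBound=0)
y = LpVariable("units_b", lowBound=0)

model += 3 * x + 2 * y, "profit"            # objective: maximize profit
model += 2 * x + y <= 100, "machine_hours"  # capacity constraint 1
model += x + 3 * y <= 90, "labour_hours"    # capacity constraint 2

model.solve()
print(x.value(), y.value(), value(model.objective))
```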
Preferred Skills:
- Knowledge of Python, C++, or Java.
- Familiarity with AMPL, GAMS, or solver APIs.
- Understanding of HPC, parallel computing, or multi-threading.
We are looking for a Technical Lead - GenAI with a strong foundation in Python, Data Analytics, Data Science or Data Engineering, system design, and practical experience in building and deploying Agentic Generative AI systems. The ideal candidate is passionate about solving complex problems using LLMs, understands the architecture of modern AI agent frameworks like LangChain/LangGraph, and can deliver scalable, cloud-native back-end services with a GenAI focus.
Key Responsibilities :
- Design and implement robust, scalable back-end systems for GenAI agent-based platforms.
- Work closely with AI researchers and front-end teams to integrate LLMs and agentic workflows into production services.
- Develop and maintain services using Python (FastAPI/Django/Flask), with best practices in modularity and performance (a minimal service sketch follows this list).
- Leverage and extend frameworks like LangChain, LangGraph, and similar to orchestrate tool-augmented AI agents.
- Design and deploy systems in Azure Cloud, including usage of serverless functions, Kubernetes, and scalable data services.
- Build and maintain event-driven / streaming architectures using Kafka, Event Hubs, or other messaging frameworks.
- Implement inter-service communication using gRPC and REST.
- Contribute to architectural discussions, especially around distributed systems, data flow, and fault tolerance.
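To sketch how such a service might be shaped, here is a minimal FastAPI endpoint that fronts an agent call; `call_agent` is a hypothetical stub standing in for a LangChain/LangGraph invocation with tools and memory.

```python
# Minimal FastAPI sketch of a back-end endpoint fronting an LLM agent.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()


class Query(BaseModel):
    session_id: str
    message: str


async def call_agent(message: str) -> str:
    # Placeholder: a real service would invoke an orchestrated agent
    # (e.g., a LangGraph graph) with retries and observability.
    return f"echo: {message}"


@app.post("/chat")
async def chat(query: Query) -> dict:
    answer = await call_agent(query.message)
    return {"session_id": query.session_id, "answer": answer}
```

Run locally with `uvicorn app:app` (module name assumed).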
Required Skills & Qualifications :
- Strong hands-on back-end development experience in Python along with Data Analytics or Data Science.
- Strong track record on platforms like LeetCode or in real-world algorithmic/system problem-solving.
- Deep knowledge of at least one Python web framework (e.g., FastAPI, Flask, Django).
- Solid understanding of LangChain, LangGraph, or equivalent LLM agent orchestration tools.
- 2+ years of hands-on experience in Generative AI systems and LLM-based platforms.
- Proven experience with system architecture, distributed systems, and microservices.
- Strong familiarity with Any Cloud infrastructure and deployment practices.
- Data Engineering or Analytics expertise is preferred, e.g., Azure Data Factory, Snowflake, Databricks, ETL tools (Talend, Informatica), BI tools (Power BI, Tableau), data modelling, and data warehouse development.
About MyOperator
MyOperator is a Business AI Operator, a category leader that unifies WhatsApp, Calls, and AI-powered chat & voice bots into one intelligent business communication platform. Unlike fragmented communication tools, MyOperator combines automation, intelligence, and workflow integration to help businesses run WhatsApp campaigns, manage calls, deploy AI chatbots, and track performance — all from a single, no-code platform. Trusted by 12,000+ brands including Amazon, Domino's, Apollo, and Razorpay, MyOperator enables faster responses, higher resolution rates, and scalable customer engagement — without fragmented tools or increased headcount.
About the Role
We are seeking a Site Reliability Engineer (SRE) with a minimum of 2 years of experience who is passionate about monitoring, observability, and ensuring system reliability. The ideal candidate will have strong expertise in Grafana, Prometheus, OpenSearch, and AWS CloudWatch, with the ability to design insightful dashboards and proactively optimize system performance.
Key Responsibilities
- Design, develop, and maintain monitoring and alerting systems using Grafana, Prometheus, and AWS CloudWatch (a small metrics sketch follows this list).
- Create and optimize dashboards to provide actionable insights into system and application performance.
- Collaborate with development and operations teams to ensure high availability and reliability of services.
- Proactively identify performance bottlenecks and drive improvements.
- Continuously explore and adopt new monitoring/observability tools and best practices.
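For a concrete picture of the monitoring work, here is a small sketch using the official Python prometheus_client; the metric names and simulated workload are hypothetical.

```python
# Expose custom application metrics for Prometheus to scrape
# (and Grafana to chart). Names here are illustrative.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("app_requests_total", "Total requests handled")
LATENCY = Histogram("app_request_latency_seconds", "Request latency in seconds")


def handle_request() -> None:
    with LATENCY.time():       # record how long the work takes
        time.sleep(random.random() / 10)
    REQUESTS.inc()             # count the completed request


if __name__ == "__main__":
    start_http_server(8000)    # metrics served at http://localhost:8000/metrics
    while True:
        handle_request()
```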
Required Skills & Qualifications
- Minimum 2 years of experience in SRE, DevOps, or related roles.
- Hands-on expertise in Grafana, Prometheus, and AWS CloudWatch.
- Proven experience in dashboard creation, visualization, and alerting setup.
- Strong understanding of system monitoring, logging, and metrics collection.
- Excellent problem-solving and troubleshooting skills.
- Quick learner with a proactive attitude and adaptability to new technologies.
Good to Have (Optional)
- Experience with AWS services beyond CloudWatch.
- Familiarity with containerization (Docker, Kubernetes) and CI/CD pipelines.
- Scripting knowledge (Python, Bash, or similar).
Why Join Us
At MyOperator, you will play a key role in ensuring the reliability, scalability, and performance of systems that power AI-driven business communication for leading global brands. You’ll work in a fast-paced, innovation-driven environment where your expertise will directly impact thousands of businesses worldwide.

One of the reputed clients in India
Our client is looking to hire a Databricks Admin immediately.
This is PAN-India bulk hiring.
Minimum of 6-8+ years with Databricks, PySpark/Python, and AWS.
AWS experience is a must.
Notice period of 15-30 days is preferred.
Share profiles at hr at etpspl dot com.
Please refer/share our email with friends/colleagues who are looking for a job.
Frontend Architect
Experience: 6+ years
Location: Delhi / Gurgaon
Roles & Responsibilities:
- Design, develop, and maintain scalable applications using React.js and FastAPI/Node.js.
- Write clean, modular, and well-documented code in Python and JavaScript.
- Deploy and manage applications on AWS using ECS, ECR, EKS, S3, and CodePipeline.
- Build and maintain CI/CD pipelines to automate testing, deployment, and monitoring.
- Implement unit, integration, and end-to-end tests with frameworks like Pytest, and document APIs with Swagger.
- Ensure secure coding practices, including authentication and authorization.
- Collaborate with cross-functional teams and mentor junior developers.
Skills Required:
- Strong expertise in React.js and modern frontend development
- Experience with FastAPI and Node.js backend
- Proficient in Python and JavaScript
- Hands-on experience with AWS cloud services and containerization (Docker)
- Knowledge of CI/CD pipelines, automated testing, and secure coding practices
- Excellent problem-solving, communication, and leadership skills
Full-Stack Developer
Exp: 5+ years required
Night shift: 8 PM-5 AM / 9 PM-6 AM
Only immediate joiners can apply.
We are seeking a mid-to-senior level Full-Stack Developer with a foundational understanding of software development, cloud services, and database management. In this role, you will contribute to both the front-end and back-end of our application, focusing on creating a seamless user experience, supported by robust and scalable cloud infrastructure.
Key Responsibilities
● Develop and maintain user-facing features using React.js and TypeScript.
● Write clean, efficient, and well-documented JavaScript/TypeScript code.
● Assist in managing and provisioning cloud infrastructure on AWS using Infrastructure as Code (IaC) principles.
● Contribute to the design, implementation, and maintenance of our databases.
● Collaborate with senior developers and product managers to deliver high-quality software.
● Troubleshoot and debug issues across the full stack.
● Participate in code reviews to maintain code quality and share knowledge.
Qualifications
● Bachelor's degree in Computer Science, a related technical field, or equivalent practical experience.
● 5+ years of professional experience in web development.
● Proficiency in JavaScript and/or TypeScript.
● Proficiency in Golang and Python.
● Hands-on experience with the React.js library for building user interfaces.
● Familiarity with Infrastructure as Code (IaC) tools and concepts (e.g., AWS CDK, Terraform, or CloudFormation).
● Basic understanding of AWS and its core services (e.g., S3, EC2, Lambda, DynamoDB).
● Experience with database management, including relational (e.g., PostgreSQL) or NoSQL (e.g., DynamoDB, MongoDB) databases.
● Strong problem-solving skills and a willingness to learn.
● Familiarity with modern front-end build pipelines and tools like Vite and Tailwind CSS.
● Knowledge of CI/CD pipelines and automated testing.
About MyOperator
MyOperator is a Business AI Operator, a category leader that unifies WhatsApp, Calls, and AI-powered chat & voice bots into one intelligent business communication platform. Unlike fragmented communication tools, MyOperator combines automation, intelligence, and workflow integration to help businesses run WhatsApp campaigns, manage calls, deploy AI chatbots, and track performance — all from a single, no-code platform.
Trusted by 12,000+ brands including Amazon, Domino's, Apollo, and Razorpay, MyOperator enables faster responses, higher resolution rates, and scalable customer engagement — without fragmented tools or increased headcount.
About the Role
We are looking for a Software Developer Intern (Zoho Ecosystem) to join our HR Tech and Automation team at MyOperator’s Noida office. This role is ideal for passionate coders who are eager to explore the Zoho platform and learn how to automate business workflows, integrate APIs, and build internal tools that enhance organizational efficiency.
You will work directly with our Zoho Developer and Engineering Operations team, gaining hands-on experience in Deluge scripting, API integrations, and system automation within one of the fastest-growing SaaS environments.
Key Responsibilities
- Develop and test API integrations between Zoho applications and third-party platforms.
- Learn and apply Deluge scripting (Zoho’s proprietary language) to automate workflows.
- Assist in creating custom functions, dashboards, and workflow logic within Zoho apps.
- Debug and document automation setups to ensure smooth internal operations.
- Collaborate with HR Tech and cross-functional teams to bring automation ideas to life.
- Support ongoing enhancement and optimization of existing Zoho systems.
Required Skills & Qualifications
- Strong understanding of at least one programming language (JavaScript or Python).
- Basic knowledge of APIs, JSON, and REST.
- Logical and analytical problem-solving mindset.
- Eagerness to explore Zoho applications (People, Recruit, Creator, CRM, etc.).
- Excellent communication and documentation skills.
Good to Have (Optional)
- Exposure to HTML, CSS, or SQL.
- Experience with workflow automation or no-code platforms.
- Familiarity with SaaS ecosystems or business process automation tools.
Internship Details
- Location: 91Springboard, Plot No. D-107, Sector 2, Noida, Uttar Pradesh – 201301
- Duration: 6 Months (Full-time, Office-based)
- Working Days: Monday to Friday
- Conversion: Strong possibility of a Full-Time Offer based on performance
Why Join Us
At MyOperator, you’ll gain hands-on experience with one of the largest SaaS ecosystems, working on real-world automations, API integrations, and workflow engineering. You’ll learn directly from experienced developers, gain exposure to internal business systems, and contribute to automating operations for a fast-scaling AI-led company.
This internship provides a strong foundation to grow into roles such as Zoho Developer, Automation Engineer, or Internal Tools Engineer, along with an opportunity for full-time employment upon successful completion.
MANDATORY:
- Strong Data Architect / Data Engineering Manager / Director profile
- Must have 12+ YOE in Data Engineering roles, with at least 2+ years in a Leadership role
- Must have 7+ YOE in hands-on Tech development with Java (Highly preferred) or Python, Node.JS, GoLang
- Must have strong experience in large data technologies, tools like HDFS, YARN, Map-Reduce, Hive, Kafka, Spark, Airflow, Presto etc.
- Strong expertise in HLD and LLD, to design scalable, maintainable data architectures.
- Must have managed a team of at least 5+ Data Engineers (leadership experience must be evident in the CV)
- Background in product companies, preferably high-scale and data-heavy
PREFERRED:
- Tier-1 college background preferred, ideally IIT
- Candidates should have spent a minimum of 3 years in each company.
- Must have recent 4+ YOE with high-growth Product startups, and should have implemented Data Engineering systems from an early stage in the Company
ROLES & RESPONSIBILITIES:
- Lead and mentor a team of data engineers, ensuring high performance and career growth.
- Architect and optimize scalable data infrastructure, ensuring high availability and reliability.
- Drive the development and implementation of data governance frameworks and best practices.
- Work closely with cross-functional teams to define and execute a data roadmap.
- Optimize data processing workflows for performance and cost efficiency.
- Ensure data security, compliance, and quality across all data platforms.
- Foster a culture of innovation and technical excellence within the data team.
IDEAL CANDIDATE:
- 10+ years of experience in software/data engineering, with at least 3+ years in a leadership role.
- Expertise in backend development with programming languages such as Java, PHP, Python, Node.JS, GoLang, JavaScript, HTML, and CSS.
- Proficiency in SQL, Python, and Scala for data processing and analytics.
- Strong understanding of cloud platforms (AWS, GCP, or Azure) and their data services.
- Strong foundation and expertise in HLD and LLD, as well as design patterns, preferably using Spring Boot or Google Guice
- Experience in big data technologies such as Spark, Hadoop, Kafka, and distributed computing frameworks.
- Hands-on experience with data warehousing solutions such as Snowflake, Redshift, or BigQuery
- Deep knowledge of data governance, security, and compliance (GDPR, SOC2, etc.).
- Experience in NoSQL databases like Redis, Cassandra, MongoDB, and TiDB.
- Familiarity with automation and DevOps tools like Jenkins, Ansible, Docker, Kubernetes, Chef, Grafana, and ELK.
- Proven ability to drive technical strategy and align it with business objectives.
- Strong leadership, communication, and stakeholder management skills.
PREFERRED QUALIFICATIONS:
- Experience in machine learning infrastructure or MLOps is a plus.
- Exposure to real-time data processing and analytics.
- Interest in data structures, algorithm analysis and design, multicore programming, and scalable architecture.
- Prior experience in a SaaS or high-growth tech company.
Job Summary:
Deqode is looking for a highly motivated and experienced Python + AWS Developer to join our growing technology team. This role demands hands-on experience in backend development, cloud infrastructure (AWS), containerization, automation, and client communication. The ideal candidate should be a self-starter with a strong technical foundation and a passion for delivering high-quality, scalable solutions in a client-facing environment.
Key Responsibilities:
- Design, develop, and deploy backend services and APIs using Python.
- Build and maintain scalable infrastructure on AWS (EC2, S3, Lambda, RDS, etc.); a small boto3 sketch follows this list.
- Automate deployments and infrastructure with Terraform and Jenkins/GitHub Actions.
- Implement containerized environments using Docker and manage orchestration via Kubernetes.
- Write automation and scripting solutions in Bash/Shell to streamline operations.
- Work with relational databases like MySQL and SQL, including query optimization.
- Collaborate directly with clients to understand requirements and provide technical solutions.
- Ensure system reliability, performance, and scalability across environments.
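A small boto3 example of the day-to-day automation this role involves is sketched below; the bucket and key names are hypothetical placeholders.

```python
# Hedged boto3 sketch: upload an artifact to S3 and share a temporary link.
import boto3

s3 = boto3.client("s3")

# Upload a generated report (names are placeholders).
s3.upload_file("report.csv", "my-team-bucket", "reports/report.csv")

# Create a time-limited download URL for a client.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-team-bucket", "Key": "reports/report.csv"},
    ExpiresIn=3600,  # one hour
)
print(url)
```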
Required Skills:
- 3.5+ years of hands-on experience in Python development.
- Strong expertise in AWS services such as EC2, Lambda, S3, RDS, IAM, CloudWatch.
- Good understanding of Terraform or other Infrastructure as Code tools.
- Proficient with Docker and container orchestration using Kubernetes.
- Experience with CI/CD tools like Jenkins or GitHub Actions.
- Strong command of SQL/MySQL and scripting with Bash/Shell.
- Experience working with external clients or in client-facing roles.
Preferred Qualifications:
- AWS Certification (e.g., AWS Certified Developer or DevOps Engineer).
- Familiarity with Agile/Scrum methodologies.
- Strong analytical and problem-solving skills.
- Excellent communication and stakeholder management abilities.
Job Title : Senior QA Automation Architect (Cloud & Kubernetes)
Experience : 6+ Years
Location : India (Multiple Offices)
Shift Timings : 12 PM to 9 PM (Noon Shift)
Working Days : 5 Days WFO (NO Hybrid)
About the Role :
We’re looking for a Senior QA Automation Architect with deep expertise in cloud-native systems, Kubernetes, and automation frameworks.
You’ll design scalable test architectures, enhance automation coverage, and ensure product reliability across hybrid-cloud and distributed environments.
Key Responsibilities :
- Architect and maintain test automation frameworks for microservices.
- Integrate automated tests into CI/CD pipelines (Jenkins, GitHub Actions); a smoke-test sketch follows this list.
- Ensure reliability, scalability, and observability of test systems.
- Work closely with DevOps and Cloud teams to streamline automation infrastructure.
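As an illustration of CI-integrated testing, here is a minimal pytest smoke test against a service health endpoint; the URL and response shape are hypothetical assumptions.

```python
# CI-friendly smoke test sketch for a microservice health endpoint.
import os

import pytest
import requests

BASE_URL = os.environ.get("BASE_URL", "http://localhost:8080")  # assumed


@pytest.fixture(scope="session")
def health_response():
    return requests.get(f"{BASE_URL}/healthz", timeout=5)


def test_service_is_up(health_response):
    assert health_response.status_code == 200


def test_reports_ready_status(health_response):
    # Assumes the endpoint returns JSON like {"status": "ok"}.
    assert health_response.json().get("status") == "ok"
```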
Mandatory Skills :
- Kubernetes, Helm, Docker, Linux
- Cloud Platforms : AWS / Azure / GCP
- CI/CD Tools : Jenkins, GitHub Actions
- Scripting : Python, Pytest, Bash
- Monitoring & Performance : Prometheus, Grafana, Jaeger, K6
- IaC Practices : Terraform / Ansible
Good to Have :
- Experience with Service Mesh (Istio/Linkerd).
- Container Security or DevSecOps exposure.
SENIOR DATA ENGINEER:
ROLE SUMMARY:
Own the design and delivery of petabyte-scale data platforms and pipelines across AWS and modern Lakehouse stacks. You’ll architect, code, test, optimize, and operate ingestion, transformation, storage, and serving layers. This role requires autonomy, strong engineering judgment, and partnership with project managers, infrastructure teams, testers, and customer architects to land secure, cost-efficient, and high-performing solutions.
RESPONSIBILITIES:
- Architecture and design: Create HLD/LLD/SAD, source–target mappings, data contracts, and optimal designs aligned to requirements.
- Pipeline development: Build and test robust ETL/ELT for batch, micro-batch, and streaming across RDBMS, flat files, APIs, and event sources (a minimal PySpark sketch follows this list).
- Performance and cost tuning: Profile and optimize jobs, right-size infrastructure, and model license/compute/storage costs.
- Data modeling and storage: Design schemas and SCD strategies; manage relational, NoSQL, data lakes, Delta Lakes, and Lakehouse tables.
- DevOps and release: Establish coding standards, templates, CI/CD, configuration management, and monitored release processes.
- Quality and reliability: Define DQ rules and lineage; implement SLA tracking, failure detection, RCA, and proactive defect mitigation.
- Security and governance: Enforce IAM best practices, retention, audit/compliance; implement PII detection and masking.
- Orchestration: Schedule and govern pipelines with Airflow and serverless event-driven patterns.
- Stakeholder collaboration: Clarify requirements, present design options, conduct demos, and finalize architectures with customer teams.
- Leadership: Mentor engineers, set FAST goals, drive upskilling and certifications, and support module delivery and sprint planning.
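To ground the pipeline-development responsibility, here is a minimal PySpark batch-transform sketch; the paths, columns, and filter values are hypothetical.

```python
# Minimal PySpark batch aggregation sketch (all names illustrative).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

orders = spark.read.parquet("s3a://raw-zone/orders/")  # placeholder source

daily_revenue = (
    orders
    .filter(F.col("status") == "COMPLETED")
    .groupBy(F.to_date("created_at").alias("order_date"))
    .agg(F.sum("amount").alias("revenue"))
)

daily_revenue.write.mode("overwrite").parquet(
    "s3a://curated-zone/daily_revenue/"  # placeholder target
)
```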
REQUIRED QUALIFICATIONS:
- Experience: 15+ years designing distributed systems at petabyte scale; 10+ years building data lakes and multi-source ingestion.
- Cloud (AWS): IAM, VPC, EC2, EKS/ECS, S3, RDS, DMS, Lambda, CloudWatch, CloudFormation, CloudTrail.
- Programming: Python (preferred), PySpark, SQL for analytics, window functions, and performance tuning.
- ETL tools: AWS Glue, Informatica, Databricks, GCP DataProc; orchestration with Airflow.
- Lakehouse/warehousing: Snowflake, BigQuery, Delta Lake/Lakehouse; schema design, partitioning, clustering, performance optimization.
- DevOps/IaC: Extensive hands-on Terraform practice; mature CI/CD experience (GitHub Actions, Jenkins); config governance and release management.
- Serverless and events: Design event-driven distributed systems on AWS.
- NoSQL: 2–3 years with DocumentDB including data modeling and performance considerations.
- AI services: AWS Entity Resolution, AWS Comprehend; run custom LLMs on Amazon SageMaker; use LLMs for PII classification.
NICE-TO-HAVE QUALIFICATIONS:
- Data governance automation: 10+ years defining audit, compliance, retention standards and automating governance workflows.
- Table and file formats: Apache Parquet; Apache Iceberg as analytical table format.
- Advanced LLM workflows: RAG and agentic patterns over proprietary data; re-ranking with index/vector store results.
- Multi-cloud exposure: Azure ADF/ADLS, GCP Dataflow/DataProc; FinOps practices for cross-cloud cost control.
OUTCOMES AND MEASURES:
- Engineering excellence: Adherence to processes, standards, and SLAs; reduced defects and non-compliance; fewer recurring issues.
- Efficiency: Faster run times and lower resource consumption with documented cost models and performance baselines.
- Operational reliability: Faster detection, response, and resolution of failures; quick turnaround on production bugs; strong release success.
- Data quality and security: High DQ pass rates, robust lineage, minimal security incidents, and audit readiness.
- Team and customer impact: On-time milestones, clear communication, effective demos, improved satisfaction, and completed certifications/training.
LOCATION AND SCHEDULE:
● Location: Outside US (OUS).
● Schedule: Minimum 6 hours of overlap with US time zones.
Skills and competencies:
Required:
- Strong analytical skills in conducting sophisticated statistical analysis using bureau/vendor data, customer performance data, and macro-economic data to solve business problems.
- Working experience in PySpark and Scala to develop code that validates and implements models in Credit Risk/Banking.
- Experience with distributed systems such as Hadoop/MapReduce, Spark, streaming data processing, and cloud architecture.
- Familiarity with machine learning frameworks and libraries (e.g., scikit-learn, SparkML, TensorFlow, PyTorch).
- Experience in systems integration, web services, and batch processing.
- Experience migrating code to PySpark/Scala is a big plus.
- Ability to act as a liaison, conveying the information needs of the business to IT and data constraints to the business, with equal fluency in business strategy and IT strategy, business processes, and workflow.
- Flexibility in approach and thought process.
- Attitude to learn and comprehend periodic changes in regulatory requirements per the FED.
Job Overview:
We are looking for a skilled Senior Backend Engineer to join our team. The ideal candidate will have a strong foundation in Java and Spring, with proven experience in building scalable microservices and backend systems. This role also requires familiarity with automation tools, Python development, and working knowledge of AI technologies.
Responsibilities:
- Design, develop, and maintain backend services and microservices.
- Build and integrate RESTful APIs across distributed systems.
- Ensure performance, scalability, and reliability of backend systems.
- Collaborate with cross-functional teams and participate in agile development.
- Deploy and maintain applications on AWS cloud infrastructure.
- Contribute to automation initiatives and AI/ML feature integration.
- Write clean, testable, and maintainable code following best practices.
- Participate in code reviews and technical discussions.
Required Skills:
- 4+ years of backend development experience.
- Strong proficiency in Java and Spring/Spring Boot frameworks.
- Solid understanding of microservices architecture.
- Experience with REST APIs, CI/CD, and debugging complex systems.
- Proficient in AWS services such as EC2, Lambda, S3.
- Strong analytical and problem-solving skills.
- Excellent communication in English (written and verbal).
Good to Have:
- Experience with automation tools like Workato or similar.
- Hands-on experience with Python development.
- Familiarity with AI/ML features or API integrations.
- Comfortable working with US-based teams (flexible hours).
🚀 Hiring: Python Full Stack Developer
⭐ Experience: 4+ Years
📍 Location: Gurgaon
⭐ Work Mode: Hybrid
⏱️ Notice Period: Immediate Joiners
(Only immediate joiners & candidates serving notice period)
🎇 About the Role:
We are looking for an experienced Python Full Stack Developer (Backend Focus) with 4–6 years of experience to join our dynamic team. You will play a key role in backend development, API design, and data processing, while also contributing to frontend tasks when needed. This position provides excellent opportunities for growth and exposure to cutting-edge technologies.
✨ Required Skills & Experience
✅ Backend Development: Python (Django/Flask), MVC patterns
✅ Databases: SQL, PostgreSQL/MySQL
✅ API Development: RESTful APIs
✅ Testing: pytest, unittest, TDD
✅ Version Control: Git workflows
✅ Frontend Basics: React, JavaScript
✅ DevOps & Tools: Docker basics, CI/CD concepts, JSON/XML/CSV handling
✅ Cloud: Basic Azure knowledge
About Us
CLOUDSUFI, a Google Cloud Premier Partner, is a Data Science and Product Engineering organization building products and solutions for Technology and Enterprise industries. We firmly believe in the power of data to transform businesses and enable better decisions. We combine unmatched experience in business processes with cutting-edge infrastructure and cloud services. We partner with our customers to monetize their data and make enterprise data dance.
Our Values
We are a passionate and empathetic team that prioritizes human values. Our purpose is to elevate the quality of lives for our family, customers, partners and the community.
Equal Opportunity Statement
CLOUDSUFI is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. All qualified candidates receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, and national origin status. We provide equal opportunities in employment, advancement, and all other areas of our workplace. Please explore more at https://www.cloudsufi.com/.
About Role:
The Senior Python Developer will lead the design and implementation of ACL crawler connectors for Workato’s search platform. This role requires deep expertise in building scalable Python services, integrating with various SaaS APIs and designing robust data models. The developer will mentor junior team members and ensure that the solutions meet the technical and performance requirements outlined in the Statement of Work.
Key Responsibilities:
- Architecture and design: Translate business requirements into technical designs for ACL crawler connectors. Define data models, API interactions and modular components using the Workato SDK.
- Implementation: Build Python services to authenticate, enumerate domain entities and extract ACL information from OneDrive, ServiceNow, HubSpot and GitHub. Implement incremental sync, pagination, concurrency and caching.
- Performance optimisation: Profile code, parallelise API calls and utilise asynchronous programming to meet crawl time SLAs. Implement retry logic and error handling for network-bound operations (an async crawling sketch follows this list).
- Testing and code quality: Develop unit and integration tests, perform code reviews and enforce best practices (type hints, linting). Produce performance reports and documentation.
- Mentoring and collaboration: Guide junior developers, collaborate with QA, DevOps and product teams, and participate in design reviews and sprint planning.
- Hypercare support: Provide Level 2/3 support during the initial rollout, troubleshoot issues, implement minor enhancements and deliver knowledge transfer sessions.
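The concurrency and retry patterns above might look like the following hedged sketch with aiohttp; the endpoint, paging scheme, and response shape are invented for illustration.

```python
# Concurrent, paginated crawling with retries and exponential backoff.
# The API at BASE is hypothetical.
import asyncio

import aiohttp

BASE = "https://api.example.com/items"


async def fetch_page(session: aiohttp.ClientSession, page: int, retries: int = 3):
    for attempt in range(retries):
        try:
            async with session.get(BASE, params={"page": page}) as resp:
                resp.raise_for_status()
                return await resp.json()
        except aiohttp.ClientError:
            if attempt == retries - 1:
                raise
            await asyncio.sleep(2 ** attempt)  # back off before retrying


async def crawl(last_page: int):
    async with aiohttp.ClientSession() as session:
        tasks = [fetch_page(session, p) for p in range(1, last_page + 1)]
        return await asyncio.gather(*tasks)  # fetch all pages concurrently


if __name__ == "__main__":
    pages = asyncio.run(crawl(5))
    print(f"fetched {len(pages)} pages")
```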
Must Have Skills and Experiences:
- Bachelor’s degree in Computer Science or related field.
- 3-8 years of Python development experience, including asynchronous programming and API integration.
- Knowledge of Python libraries: pandas, pytest, requests, asyncio.
- Strong understanding of authentication protocols (OAuth 2.0, API keys) and access‑control models.
- Experience with integration with cloud or SaaS platforms such as Microsoft Graph, ServiceNow REST API, HubSpot API, GitHub API.
- Proven ability to lead projects and mentor other engineers.
- Excellent communication skills and ability to produce clear documentation.
Optional/Good to Have Skills and Experiences:
- Experience with integration with Microsoft Graph API, ServiceNow REST API, HubSpot API, GitHub API.
- Familiarity with the following libraries, tools, and technologies will be advantageous: aiohttp, PyJWT, aiofiles/aiocache.
- Experience with containerisation (Docker), CI/CD pipelines and Workato’s connector SDK is also considered a plus.
🚀 About Us
At Remedo, we're building the future of digital healthcare marketing. We help doctors grow their online presence, connect with patients, and drive real-world outcomes like higher appointment bookings and better Google reviews — all while improving their SEO.
We’re also the creators of Convertlens, our generative AI-powered engagement engine that transforms how clinics interact with patients across the web. Think hyper-personalized messaging, automated conversion funnels, and insights that actually move the needle.
We’re a lean, fast-moving team with startup DNA. If you like ownership, impact, and tech that solves real problems — you’ll fit right in.
🛠️ What You’ll Do
- Build and maintain scalable Python back-end systems that power Convertlens and internal applications.
- Develop Agentic AI applications and workflows to drive automation and insights.
- Design and implement connectors to third-party systems (APIs, CRMs, marketing tools) to source and unify data.
- Ensure system reliability with strong practices in observability, monitoring, and troubleshooting.
⚙️ What You Bring
- 2+ years of hands-on experience in Python back-end development.
- Strong understanding of REST API design and integration.
- Proficiency with relational databases (MySQL/PostgreSQL).
- Familiarity with observability tools (logging, monitoring, tracing — e.g., OpenTelemetry, Prometheus, Grafana, ELK).
- Experience maintaining production systems with a focus on reliability and scalability.
- Bonus: Exposure to Node.js and modern front-end frameworks like ReactJs.
- Strong problem-solving skills and comfort working in a startup/product environment.
- A builder mindset — scrappy, curious, and ready to ship.
💼 Perks & Culture
- Flexible work setup — remote-first for most, hybrid if you’re in Delhi NCR.
- A high-growth, high-impact environment where your code goes live fast.
- Opportunities to work with Agentic AI and cutting-edge tech.
- Small team, big vision — your work truly matters here.
Job Description
We are looking for a talented Java Developer to work abroad. You will be responsible for developing high-quality software solutions, working on both server-side components and integrations, and ensuring optimal performance and scalability.
Preferred Qualifications
- Experience with microservices architecture.
- Knowledge of cloud platforms (AWS, Azure).
- Familiarity with Agile/Scrum methodologies.
- Understanding of front-end technologies (HTML, CSS, JavaScript) is a plus.
Requirement Details
- Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent experience).
- Proven experience as a Java Developer or similar role.
- Strong knowledge of the Java programming language and its frameworks (Spring, Hibernate).
- Experience with relational databases (e.g., MySQL, PostgreSQL) and ORM tools.
- Familiarity with RESTful APIs and web services.
- Understanding of version control systems (e.g., Git).
- Solid understanding of object-oriented programming (OOP) principles.
- Strong problem-solving skills and attention to detail.
🚀 We’re Hiring: Senior Cloud & ML Infrastructure Engineer 🚀
We’re looking for an experienced engineer to lead the design, scaling, and optimization of cloud-native ML infrastructure on AWS.
If you’re passionate about platform engineering, automation, and running ML systems at scale, this role is for you.
What you’ll do:
🔹 Architect and manage ML infrastructure with AWS (SageMaker, Step Functions, Lambda, ECR)
🔹 Build highly available, multi-region solutions for real-time & batch inference
🔹 Automate with IaC (AWS CDK, Terraform) and CI/CD pipelines
🔹 Ensure security, compliance, and cost efficiency
🔹 Collaborate across DevOps, ML, and backend teams
What we’re looking for:
✔️ 6+ years AWS cloud infrastructure experience
✔️ Strong ML pipeline experience (SageMaker, ECS/EKS, Docker)
✔️ Proficiency in Python/Go/Bash scripting
✔️ Knowledge of networking, IAM, and security best practices
✔️ Experience with observability tools (CloudWatch, Prometheus, Grafana)
✨ Nice to have: Robotics/IoT background (ROS2, Greengrass, Edge Inference)
📍 Location: Bengaluru, Hyderabad, Mumbai, Pune, Mohali, Delhi
5 days working, Work from Office
Night shifts: 9pm to 6am IST
👉 If this sounds like you (or someone you know), let’s connect!
Apply here:
About Us
CLOUDSUFI, a Google Cloud Premier Partner, is a Data Science and Product Engineering organization building products and solutions for Technology and Enterprise industries. We firmly believe in the power of data to transform businesses and enable better decisions. We combine unmatched experience in business processes with cutting-edge infrastructure and cloud services. We partner with our customers to monetize their data and make enterprise data dance.
Our Values
We are a passionate and empathetic team that prioritizes human values. Our purpose is to elevate the quality of lives for our family, customers, partners and the community.
Equal Opportunity Statement
CLOUDSUFI is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. All qualified candidates receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, and national origin status. We provide equal opportunities in employment, advancement, and all other areas of our workplace. Please explore more at https://www.cloudsufi.com/.
Job Summary:
We are seeking a highly innovative and skilled AI Engineer to join our AI CoE for the Data Integration Project. The ideal candidate will be responsible for designing, developing, and deploying intelligent assets and AI agents that automate and optimize various stages of the data ingestion and integration pipeline. This role requires expertise in machine learning, natural language processing (NLP), knowledge representation, and cloud platform services, with a strong focus on building scalable and accurate AI solutions.
Key Responsibilities:
- LLM-based Auto-schematization: Develop and refine LLM-based models and techniques for automatically inferring schemas from diverse unstructured and semi-structured public datasets and mapping them to a standardized vocabulary.
- Entity Resolution & ID Generation AI: Design and implement AI models for highly accurate entity resolution, matching new entities with existing IDs and generating unique, standardized IDs for newly identified entities.
- Automated Data Profiling & Schema Detection: Develop AI/ML accelerators for automated data profiling, pattern detection, and schema detection to understand data structure and quality at scale.
- Anomaly Detection & Smart Imputation: Create AI-powered solutions for identifying outliers, inconsistencies, and corrupt records, and for intelligently filling missing values using machine learning algorithms (a small outlier-flagging sketch follows this list).
- Multilingual Data Integration AI: Develop AI assets for accurately interpreting, translating (leveraging automated tools with human-in-the-loop validation), and semantically mapping data from diverse linguistic sources, preserving meaning and context.
- Validation Automation & Error Pattern Recognition: Build AI agents to run comprehensive data validation tool checks, identify common error types, suggest fixes, and automate common error corrections.
- Knowledge Graph RAG/RIG Integration: Integrate Retrieval Augmented Generation (RAG) and Retrieval Augmented Indexing (RIG) techniques to enhance querying capabilities and facilitate consistency checks within the Knowledge Graph.
- MLOps Implementation: Implement and maintain MLOps practices for the lifecycle management of AI models, including versioning, deployment, monitoring, and retraining on a relevant AI platform.
- Code Generation & Documentation Automation: Develop AI tools for generating reusable scripts, templates, and comprehensive import documentation to streamline development.
- Continuous Improvement Systems: Design and build learning systems, feedback loops, and error analytics mechanisms to continuously improve the accuracy and efficiency of AI-powered automation over time.
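As one small, concrete slice of the anomaly-detection work, the sketch below flags outlier records with scikit-learn's IsolationForest; the data here is synthetic, whereas real inputs would come from the profiled datasets described above.

```python
# Flag suspect records with an Isolation Forest (synthetic data for illustration).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
normal = rng.normal(loc=0.0, scale=1.0, size=(200, 2))
outliers = rng.uniform(low=-6, high=6, size=(5, 2))
X = np.vstack([normal, outliers])

model = IsolationForest(contamination=0.03, random_state=0).fit(X)
labels = model.predict(X)  # -1 = anomaly, 1 = normal

print(f"flagged {(labels == -1).sum()} suspect records out of {len(X)}")
```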
Required Skills and Qualifications:
- Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Machine Learning, or a related quantitative field.
- Proven experience (e.g., 3+ years) as an AI/ML Engineer, with a strong portfolio of deployed AI solutions.
- Strong expertise in Natural Language Processing (NLP), including experience with Large Language Models (LLMs) and their applications in data processing.
- Proficiency in Python and relevant AI/ML libraries (e.g., TensorFlow, PyTorch, scikit-learn).
- Hands-on experience with cloud AI/ML services.
- Understanding of knowledge representation, ontologies (e.g., Schema.org, RDF), and knowledge graphs.
- Experience with data quality, validation, and anomaly detection techniques.
- Familiarity with MLOps principles and practices for model deployment and lifecycle management.
- Strong problem-solving skills and an ability to translate complex data challenges into AI solutions.
- Excellent communication and collaboration skills.
Preferred Qualifications:
- Experience with data integration projects, particularly with large-scale public datasets.
- Familiarity with knowledge graph initiatives.
- Experience with multilingual data processing and AI.
- Contributions to open-source AI/ML projects.
- Experience in an Agile development environment.
Benefits:
- Opportunity to work on a high-impact project at the forefront of AI and data integration.
- Contribute to solidifying a leading data initiative's role as a foundational source for grounding Large Models.
- Access to cutting-edge cloud AI technologies.
- Collaborative, innovative, and fast-paced work environment.
- Significant impact on data quality and operational efficiency.
Position: Junior AI Research Engineer (Generative AI)
Location: Noida
Company: CodeFire Technologies Pvt. Ltd.
About the Role:
Looking for a sharp and motivated Junior AI Research Engineer to join our team and work on cutting-edge Generative AI projects. If you're from a premier institute (IIT/NIT/IIIT) and love solving complex problems, this is your chance to work hands-on with large language models and the latest GenAI tooling.
What You'll Do:
1. Understand nuances of multiple LLMs and use them in developing applications
2. Explore different techniques of prompt engineering and measure its impact on solutions
3. Look out for latest research and keep yourself updated
4. Be part of core group that leads Gen AI practice in the company
What We're Looking For:
1. Recent graduate (any branch) from IIT/NIT/IIIT
2. Strong analytical and logical reasoning
3. Research oriented mindset
4. Passionate to be on the forefront of GenAI revolution
5. Hands-on experience (projects, papers, GitHub, etc.) in AI
6. Worked with Python, PyTorch/TensorFlow, LangChain, or OpenAI APIs
Why Join Us:
1. Work on meaningful, cutting edge, real-world GenAI applications
2. Mentorship from experienced tech leaders
3. Flexible and innovation-driven culture
4. Exposure to early-stage AI product building
We are seeking a highly skilled Qt/QML Engineer to design and develop advanced GUIs for aerospace applications. The role requires working closely with system architects, avionics software engineers, and mission systems experts to create reliable, intuitive, real-time UIs for mission-critical systems such as UAV ground control stations and cockpit displays.
Key Responsibilities
- Design, develop, and maintain high-performance UI applications using Qt/QML (Qt Quick, QML, C++).
- Translate system requirements into responsive, interactive, and user-friendly interfaces.
- Integrate UI components with real-time data streams from avionics systems, UAVs, or mission control software.
- Collaborate with aerospace engineers to ensure compliance with DO-178C, or MIL-STD guidelines where applicable.
- Optimise application performance for low-latency visualisation in mission-critical environments.
- Implement data visualisation (raster and vector maps, telemetry, flight parameters, mission planning overlays).
- Write clean, testable, and maintainable code while adhering to aerospace software standards.
- Work with cross-functional teams (system engineers, hardware engineers, test teams) to validate UI against operational requirements.
- Support debugging, simulation, and testing activities, including hardware-in-the-loop (HIL) setups.
Required Qualifications
- Bachelor’s / Master’s degree in Computer Science, Software Engineering, or related field.
- 1-3 years of experience in developing Qt/QML-based applications (Qt Quick, QML, Qt Widgets).
- Strong proficiency in C++ (11/14/17) and object-oriented programming.
- Experience integrating UI with real-time data sources (TCP/IP, UDP, serial, CAN, DDS, etc.).
- Knowledge of multithreading, performance optimisation, and memory management.
- Familiarity with aerospace/automotive domain software practices or mission-critical systems.
- Good understanding of UX principles for operator consoles and mission planning systems.
- Strong problem-solving, debugging, and communication skills.
Desirable Skills
- Experience with GIS/Mapping libraries (OpenSceneGraph, Cesium, Marble, etc.).
- Knowledge of OpenGL, Vulkan, or 3D visualisation frameworks.
- Exposure to DO-178C or aerospace software compliance.
- Familiarity with UAV ground control software (QGroundControl, Mission Planner, etc.) or similar mission systems.
- Experience with Linux and cross-platform development (Windows/Linux).
- Scripting knowledge in Python for tooling and automation.
- Background in defence, aerospace, automotive or embedded systems domain.
What We Offer
- Opportunity to work on cutting-edge aerospace and defence technologies.
- Collaborative and innovation-driven work culture.
- Exposure to real-world avionics and mission systems.
- Growth opportunities in autonomy, AI/ML for aerospace, and avionics UI systems.
We are seeking a talented and passionate Data Engineer to join our growing data team. In this role, you will be responsible for building, maintaining, and optimizing our data pipelines and infrastructure on Google Cloud Platform (GCP). The ideal candidate will have a strong background in data warehousing, ETL/ELT processes, and a passion for turning raw data into actionable insights. You will work closely with data scientists, analysts, and other engineers to support a variety of data-driven initiatives.
Responsibilities:
• Design, develop, and maintain scalable and reliable data pipelines using Dataform or DBT.
• Build and optimize data warehousing solutions on Google BigQuery.
• Develop and manage data workflows using Apache Airflow (see the sketch after this list).
• Write complex and efficient SQL queries for data extraction, transformation, and analysis.
• Develop Python-based scripts and applications for data processing and automation.
• Collaborate with data scientists and analysts to understand their data requirements and provide solutions.
• Implement data quality checks and monitoring to ensure data accuracy and consistency.
• Optimize data pipelines for performance, scalability, and cost-effectiveness.
• Contribute to the design and implementation of data infrastructure best practices.
• Troubleshoot and resolve data-related issues.
• Stay up-to-date with the latest data engineering trends and technologies, particularly within the Google Cloud ecosystem.
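As a hedged example of the Airflow plus BigQuery plus DBT workflow described above (assuming Airflow 2.x with the Google provider installed; the project, dataset, and schedule are placeholders):

```python
# Illustrative DAG: load yesterday's raw events into a staging table in
# BigQuery, then run a dbt build. All names and paths are hypothetical.
from datetime import datetime
from airflow import DAG
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_events_elt",
    start_date=datetime(2024, 1, 1),
    schedule="0 3 * * *",   # once a day at 03:00
    catchup=False,
) as dag:
    load_raw = BigQueryInsertJobOperator(
        task_id="load_raw_events",
        configuration={
            "query": {
                "query": """
                    CREATE OR REPLACE TABLE `my_project.staging.events` AS
                    SELECT * FROM `my_project.raw.events`
                    WHERE DATE(event_ts) = CURRENT_DATE() - 1
                """,
                "useLegacySql": False,
            }
        },
    )
    dbt_build = BashOperator(
        task_id="dbt_build",
        bash_command="dbt build --project-dir /opt/dbt --target prod",
    )
    load_raw >> dbt_build  # transform only after the load succeeds
```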
Qualifications:
• Bachelor's degree in Computer Science, a related technical field, or equivalent practical experience.
• 3-4 years of experience in a Data Engineer role.
• Strong expertise in SQL (preferably with BigQuery SQL).
• Proficiency in Python programming for data manipulation and automation.
• Hands-on experience with Google Cloud Platform (GCP) and its data services.
• Solid understanding of data warehousing concepts and ETL/ELT methodologies.
• Experience with Dataform or DBT for data transformation and modeling.
• Experience with workflow management tools such as Apache Airflow.
• Excellent problem-solving and analytical skills.
• Strong communication and collaboration skills.
• Ability to work independently and as part of a team.
Preferred Qualifications:
• Google Cloud Professional Data Engineer certification.
• Knowledge of data modeling techniques (e.g., dimensional modeling, star schema).
• Familiarity with Agile development methodologies.
Development and Customization:
Build and customize Frappe modules to meet business requirements (see the sketch after this list).
Develop new functionalities and troubleshoot issues in ERPNext applications.
Integrate third-party APIs for seamless interoperability.
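A hedged sketch of a custom, whitelisted Frappe server method of the kind this work involves; Sales Invoice is a standard ERPNext DocType, but the filters and fields shown are illustrative assumptions.

```python
# Hypothetical custom app code: expose a small query over Frappe's REST layer.
import frappe

@frappe.whitelist()
def overdue_invoices(customer: str):
    """Return overdue Sales Invoices for a customer, most urgent first."""
    return frappe.get_all(
        "Sales Invoice",
        filters={"customer": customer, "status": "Overdue"},
        fields=["name", "due_date", "outstanding_amount"],
        order_by="due_date asc",
    )
```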
Technical Support:
Provide technical support to end-users and resolve system issues.
Maintain technical documentation for implementations.
Collaboration:
Work with teams to gather requirements and recommend solutions.
Participate in code reviews for quality standards.
Continuous Improvement:
Stay updated with Frappe developments and optimize application performance.
Skills Required:
Proficiency in Python, JavaScript, and relational databases.
Knowledge of Frappe/ERPNext framework and object-oriented programming.
Experience with Git for version control.
Strong analytical skills
Job Title : Python Developer - Web3 (Mandatory) & Trading Bot Creation (Optional)
Experience : 2+ Years
Location : Noida (On-site)
Working Days : 6 days (Monday to Friday - WFO; Saturday - WFH)
Job Type : Full-time
Mandatory Skills : Python, Web3 (web3.py/ethers), smart contract interaction, real-time APIs/WebSockets, Git/Docker, security handling.
Responsibilities :
- Build and optimize Web3-based applications & integrations using Python.
- Interact with smart contracts and manage on-chain/off-chain data flows (see the sketch after this list).
- Ensure secure key management, scalability, and performance.
- (Optional) Develop and enhance automated trading bots & strategies.
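A minimal web3.py (v6-style) sketch of smart-contract interaction, reading an ERC-20 balance; the RPC endpoint and the zeroed addresses are placeholders to be replaced with real values.

```python
# Hedged example: call a view function on an ERC-20 contract with web3.py v6.
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://rpc.example.org"))  # placeholder endpoint

# Minimal ABI fragment for balanceOf(address) -> uint256
ERC20_ABI = [{
    "name": "balanceOf", "type": "function", "stateMutability": "view",
    "inputs": [{"name": "owner", "type": "address"}],
    "outputs": [{"name": "", "type": "uint256"}],
}]

token = w3.eth.contract(
    address=Web3.to_checksum_address("0x0000000000000000000000000000000000000000"),
    abi=ERC20_ABI,
)
holder = Web3.to_checksum_address("0x0000000000000000000000000000000000000000")
balance = token.functions.balanceOf(holder).call()  # read-only, no gas needed
print(f"raw balance: {balance}")
```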
Required Skills :
- Strong experience in Python development.
- Proficiency in Web3 (web3.py/ethers) and smart contract interaction.
- Hands-on with real-time APIs, WebSockets, Git/Docker.
- Knowledge of security handling & key management.
- (Optional) Trading bot development, CEX/DEX APIs, backtesting (pandas/numpy).
Job Title: L3 SDE (Python- Django)
Location: Arjan Garh, MG Road (Delhi)
Job Type: Full-time, On site
Pay Range: Rs. 30,000 - 70,000
**IMMEDIATE JOINERS REQUIRED**
About Us:
Our aim is to develop ‘More Data, More Opportunities’. We take pride in building cutting-edge AI solutions that help financial institutions mitigate risk and generate comprehensive data. Elevate your business's credibility with Timble Glance's verification and authentication solutions.
Responsibilities
• Write and test code, debug programs, and integrate applications with third-party web services. To be successful in this role, you should have experience using server-side logic and work well in a team. Ultimately, you'll build highly responsive web applications that align with our client's business needs.
• Write effective, scalable code
• Develop back-end components to improve responsiveness and overall performance
• Integrate user-facing elements into applications
• Improve functionality of existing systems
• Implement security and data protection solutions
• Assess and prioritize feature requests
• Coordinate with internal teams to understand user requirements and provide technical solutions
• Create customized applications for smaller tasks to enhance website capability based on business needs
• Build table frames and forms and write script within the browser to enhance site functionality
• Ensure web pages are functional across different browser types; conduct tests to verify user functionality
• Verify compliance with accessibility standards
• Assist in resolving moderately complex production support problems
Profile Requirements
* IMMEDIATE JOINERS REQUIRED
* 2 years or more experience as a Python Developer
* Expertise in at least one Python framework, with Django required
* Knowledge of object-relational mapping (ORM); a short sketch follows this list
* Familiarity with front-end technologies like JavaScript, HTML5, and CSS3
* Familiarity with event-driven programming in Python
* Good understanding of the operating system and networking concepts.
* Good analytical and troubleshooting skills
* Graduation/Post Graduation in Computer Science / IT / Software Engineering
* Decent verbal and written communication skills to communicate with customers, support personnel, and management
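For the ORM requirement above, a hedged illustration of a Django model and the query it enables; the Customer model and its fields are invented for this example and would live inside an existing Django app.

```python
# Hypothetical Django app code: a model plus an ORM query over it.
from django.db import models

class Customer(models.Model):
    name = models.CharField(max_length=100)
    risk_score = models.FloatField(default=0.0)
    verified = models.BooleanField(default=False)
    created_at = models.DateTimeField(auto_now_add=True)

def high_risk_unverified():
    # ORM equivalent of:
    #   SELECT ... WHERE risk_score > 0.8 AND verified = false
    #   ORDER BY created_at DESC
    return (Customer.objects
            .filter(risk_score__gt=0.8, verified=False)
            .order_by("-created_at"))
```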
How to apply: Drop your CV at linkedin.com/in/preeti-bisht-1633b1263/ with your current CTC, notice period, and expected CTC
Key Responsibilities
- Design and implement ETL/ELT pipelines using Databricks, PySpark, and AWS Glue (see the sketch after this list)
- Develop and maintain scalable data architectures on AWS (S3, EMR, Lambda, Redshift, RDS)
- Perform data wrangling, cleansing, and transformation using Python and SQL
- Collaborate with data scientists to integrate Generative AI models into analytics workflows
- Build dashboards and reports to visualize insights using tools like Power BI or Tableau
- Ensure data quality, governance, and security across all data assets
- Optimize performance of data pipelines and troubleshoot bottlenecks
- Work closely with stakeholders to understand data requirements and deliver actionable insights
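As a hedged sketch of the PySpark wrangling and cleansing described above (the bucket paths and column names are placeholders):

```python
# Illustrative PySpark cleansing/transform step; all names are invented.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

raw = spark.read.json("s3://example-bucket/raw/orders/")      # placeholder path
clean = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
       .filter(F.col("amount") > 0)                           # drop bad rows
)
(clean.write.mode("overwrite")
      .partitionBy("order_date")
      .parquet("s3://example-bucket/curated/orders/"))
```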
🧪 Required Skills
- Cloud Platforms: AWS (S3, Lambda, Glue, EMR, Redshift)
- Big Data: Databricks, Apache Spark, PySpark
- Programming: Python, SQL
- Data Engineering: ETL/ELT, Data Lakes, Data Warehousing
- Analytics: Data Modeling, Visualization, BI Reporting
- Gen AI Integration: OpenAI, Hugging Face, LangChain (preferred)
- DevOps (Bonus): Git, Jenkins, Terraform, Docker
📚 Qualifications
- Bachelor's or Master’s degree in Computer Science, Data Science, or related field
- 3+ years of experience in data engineering or data analytics
- Hands-on experience with Databricks, PySpark, and AWS
- Familiarity with Generative AI tools and frameworks is a strong plus
- Strong problem-solving and communication skills
🌟 Preferred Traits
- Analytical mindset with attention to detail
- Passion for data and emerging technologies
- Ability to work independently and in cross-functional teams
- Eagerness to learn and adapt in a fast-paced environment
EDI Developer / Map Conversion Specialist
Role Summary:
Responsible for converting 441 existing EDI maps into the PortPro-compatible format and testing them for 147 customer configurations.
Key Responsibilities:
- Analyze existing EDI maps in Profit Tools.
- Convert, reconfigure, or rebuild maps for PortPro.
- Ensure accuracy in mapping and transformation logic.
- Unit test and debug EDI transactions.
- Support system integration and UAT phases.
Skills Required:
- Proficiency in EDI standards (X12, EDIFACT) and transaction sets.
- Hands-on experience in EDI mapping tools.
- Familiarity with both Profit Tools and PortPro data structures.
- SQL and XML/JSON data handling skills.
- Experience with scripting for automation (Python or shell scripting preferred); a toy example follows this list.
- Strong troubleshooting and debugging skills.
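A toy Python illustration of the scripting bullet above: splitting an X12 interchange into segments and elements. Real conversions rely on dedicated EDI mapping tools; the sample string, terminator, and separator here are simplified assumptions.

```python
# Toy X12 segment splitter; the sample and separators are assumptions.
SAMPLE = "ST*204*0001~B2**SCAC**12345**CC~SE*3*0001~"

def segments(x12: str, seg_term: str = "~", elem_sep: str = "*"):
    """Yield (segment_id, elements) pairs from a raw X12 string."""
    for seg in filter(None, x12.split(seg_term)):
        parts = seg.split(elem_sep)
        yield parts[0], parts[1:]

for tag, elems in segments(SAMPLE):
    print(tag, elems)   # e.g. ST ['204', '0001']
```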
About Us
CLOUDSUFI, a Google Cloud Premier Partner, is a Data Science and Product Engineering organization building products and solutions for the Technology and Enterprise industries. We firmly believe in the power of data to transform businesses and enable better decisions. We combine unmatched experience in business processes with cutting-edge infrastructure and cloud services, and we partner with our customers to monetize their data and make enterprise data dance.
Our Values
We are a passionate and empathetic team that prioritizes human values. Our purpose is to elevate the quality of lives for our family, customers, partners and the community.
Equal Opportunity Statement
CLOUDSUFI is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. All qualified candidates receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, and national origin status. We provide equal opportunities in employment, advancement, and all other areas of our workplace. Please explore more at https://www.cloudsufi.com/.
Role Overview:
As a Senior Data Scientist / AI Engineer, you will be a key player in our technical leadership. You will be responsible for designing, developing, and deploying sophisticated AI and Machine Learning solutions, with a strong emphasis on Generative AI and Large Language Models (LLMs). You will architect and manage scalable AI microservices, drive research into state-of-the-art techniques, and translate complex business requirements into tangible, high-impact products. This role requires a blend of deep technical expertise, strategic thinking, and leadership.
Key Responsibilities:
- Architect & Develop AI Solutions: Design, build, and deploy robust and scalable machine learning models, with a primary focus on Natural Language Processing (NLP), Generative AI, and LLM-based Agents.
- Build AI Infrastructure: Create and manage AI-driven microservices using frameworks like Python FastAPI, ensuring high performance and reliability (see the sketch after this list).
- Lead AI Research & Innovation: Stay abreast of the latest advancements in AI/ML. Lead research initiatives to evaluate and implement state-of-the-art models and techniques for performance and cost optimization.
- Solve Business Problems: Collaborate with product and business teams to understand challenges and develop data-driven solutions that create significant business value, such as building business rule engines or predictive classification systems.
- End-to-End Project Ownership: Take ownership of the entire lifecycle of AI projects—from ideation, data processing, and model development to deployment, monitoring, and iteration on cloud platforms.
- Team Leadership & Mentorship: Lead learning initiatives within the engineering team, mentor junior data scientists and engineers, and establish best practices for AI development.
- Cross-Functional Collaboration: Work closely with software engineers to integrate AI models into production systems and contribute to the overall system architecture.
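A minimal sketch of such an AI microservice, assuming FastAPI and Hugging Face Transformers are installed; the endpoint shape and reliance on the pipeline's default model are illustrative assumptions.

```python
# Hypothetical AI microservice: wrap a text classifier behind a REST endpoint.
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import pipeline

app = FastAPI(title="sentiment-service")
classifier = pipeline("sentiment-analysis")  # downloads a default HF model

class TextIn(BaseModel):
    text: str

@app.post("/classify")
def classify(payload: TextIn):
    result = classifier(payload.text)[0]
    return {"label": result["label"], "score": round(result["score"], 4)}
```

Served with, for example, `uvicorn main:app`, this pattern keeps the model behind a stable API contract that other services can call.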
Required Skills and Qualifications
- Master’s (M.Tech.) or Bachelor's (B.Tech.) degree in Computer Science, Artificial Intelligence, Information Technology, or a related field.
- 6+ years of professional experience in a Data Scientist, AI Engineer, or related role.
- Expert-level proficiency in Python and its core data science libraries (e.g., PyTorch, Huggingface Transformers, Pandas, Scikit-learn).
- Demonstrable, hands-on experience building and fine-tuning Large Language Models (LLMs) and implementing Generative AI solutions.
- Proven experience in developing and deploying scalable systems on cloud platforms, particularly AWS. Experience with GCS is a plus.
- Strong background in Natural Language Processing (NLP), including experience with multilingual models and transcription.
- Experience with containerization technologies, specifically Docker.
- Solid understanding of software engineering principles and experience building APIs and microservices.
Preferred Qualifications
- A strong portfolio of projects. A track record of publications in reputable AI/ML conferences is a plus.
- Experience with full-stack development (Node.js, Next.js) and various database technologies (SQL, MongoDB, Elasticsearch).
- Familiarity with setting up and managing CI/CD pipelines (e.g., Jenkins).
- Proven ability to lead technical teams and mentor other engineers.
- Experience developing custom tools or packages for data science workflows.
Job Description: Senior Full-Stack Engineer (MERN + Python)
Location: Noida (Onsite)
Experience: 5 to 10 years
We are hiring a Senior Full-Stack Engineer with proven expertise in MERN technologies and Python backend frameworks to deliver scalable, efficient, and maintainable software solutions. You will design and build web applications and microservices, leveraging FastAPI and advanced asynchronous programming techniques to ensure high performance and reliability.
Key Responsibilities:
- Develop and maintain web applications using the MERN stack alongside Python backend microservices.
- Build efficient and scalable APIs with Python frameworks like FastAPI and Flask, utilizing AsyncIO, multithreading, and multiprocessing for optimal performance (see the sketch after this list).
- Lead architecture and technical decisions spanning both MERN frontend and Python microservices backend.
- Collaborate with UX/UI designers to create intuitive and responsive user interfaces.
- Mentor junior developers and conduct code reviews to ensure adherence to best practices.
- Manage and optimize databases such as MongoDB and PostgreSQL for application and microservices needs.
- Deploy, monitor, and maintain applications and microservices on AWS cloud infrastructure (EC2, Lambda, S3, RDS).
- Implement CI/CD pipelines to automate integration and deployment processes.
- Participate in Agile development practices including sprint planning and retrospectives.
- Ensure application scalability, security, and performance across frontend and backend systems.
- Design cloud-native microservices architectures focused on high availability and fault tolerance.
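A small AsyncIO illustration of the concurrency mentioned above, fanning several slow I/O calls out concurrently instead of serially (assumes the httpx client library; the URL is a dummy):

```python
# Hedged AsyncIO sketch: concurrent I/O with asyncio.gather and httpx.
import asyncio
import httpx

async def fetch(client: httpx.AsyncClient, url: str) -> int:
    resp = await client.get(url)
    return resp.status_code

async def main() -> None:
    urls = ["https://example.org"] * 5
    async with httpx.AsyncClient() as client:
        # gather() runs all requests concurrently; total time ~ slowest call
        codes = await asyncio.gather(*(fetch(client, u) for u in urls))
    print(codes)

if __name__ == "__main__":
    asyncio.run(main())
```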
Required Skills and Experience:
- Strong hands-on experience with the MERN stack: MongoDB, Express.js, React.js, Node.js.
- Proven Python backend development expertise with FastAPI and Flask.
- Deep understanding of asynchronous programming using AsyncIO, multithreading, and multiprocessing.
- Experience designing and developing microservices and RESTful/GraphQL APIs.
- Skilled in database design and optimization for MongoDB and PostgreSQL.
- Familiar with AWS services such as EC2, Lambda, S3, and RDS.
- Experience with Git, CI/CD tools, and automated testing/deployment workflows.
- Ability to lead teams, mentor developers, and make key technical decisions.
- Strong problem-solving, debugging, and communication skills.
- Comfortable working in Agile environments and collaborating cross-functionally.
Sr. Staff Engineer Role
We are looking for a customer-obsessed, analytical Sr. Staff Engineer to lead the development and growth of our Tax Compliance product suite. In this role, you'll shape innovative digital solutions that simplify and automate tax filing, reconciliation, and compliance workflows for businesses of all sizes. You will join a fast-growing company where you'll work in a dynamic and competitive market, impacting how businesses meet their statutory obligations with speed, accuracy, and confidence.
As the Sr. Staff Engineer, you'll work closely with product, DevOps, and data teams to architect reliable systems, drive engineering excellence, and ensure high availability across our platform. We're looking for a technical leader who's not just an expert in building scalable systems, but also passionate about mentoring engineers and shaping the future of fintech.
Responsibilities
● Lead, mentor, and inspire a high-performing engineering team (or operate as a hands-on technical lead).
● Drive the design and development of scalable backend services using Python/Node.js, with experience in Django, FastAPI, and task orchestration systems.
● Own and evolve our CI/CD pipelines with Jenkins, ensuring fast, safe, and reliable deployments.
● Architect and manage infrastructure using AWS and Terraform with a DevOps-first mindset.
● Collaborate cross-functionally with product managers, designers, and compliance experts to deliver features that make tax compliance seamless for our users.
● Set and enforce engineering best practices, code quality standards, and operational excellence.
● Stay up-to-date with industry trends and advocate for continuous improvement in engineering processes.
Nice to Have
● Experience in fintech, tax, or compliance industries.
● Familiarity with containerization tools like Docker and orchestration with Kubernetes.
● Background in security, observability, or compliance automation.
Requirements
● 8+ years of software engineering experience, with at least 2+ years in a leadership or principal-level role.
● Deep expertise in Python/Node.js, including API development, performance optimization, and testing.
● Experience with event-driven architecture and message brokers such as Kafka or RabbitMQ (see the sketch below).
● Strong experience with AWS services (e.g., ECS, Lambda, S3, RDS, CloudWatch).
● Solid understanding of Terraform for infrastructure as code.
● Proficiency with Jenkins or similar CI/CD tooling.
● Comfortable balancing technical leadership with hands-on coding and problem-solving.
● Strong communication skills and a collaborative mindset.
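A hedged sketch of the event-driven pattern referenced above, using the kafka-python client (`pip install kafka-python`); the broker address, topic name, and event payload are placeholders.

```python
# Illustrative producer: emit a domain event instead of calling the
# downstream service directly; consumers subscribe independently.
import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",   # placeholder broker
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("tax-filing.submitted", {"filing_id": "F-123", "status": "submitted"})
producer.flush()   # block until the event is actually delivered
```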
We are seeking a highly skilled and motivated Python Developer with hands-on experience in AWS cloud services (Lambda, API Gateway, EC2), microservices architecture, PostgreSQL, and Docker. The ideal candidate will be responsible for designing, developing, deploying, and maintaining scalable backend services and APIs, with a strong emphasis on cloud-native solutions and containerized environments.
Key Responsibilities:
- Develop and maintain scalable backend services using Python (Flask, FastAPI, or Django).
- Design and deploy serverless applications using AWS Lambda and API Gateway (see the handler sketch after this list).
- Build and manage RESTful APIs and microservices.
- Implement CI/CD pipelines for efficient and secure deployments.
- Work with Docker to containerize applications and manage container lifecycles.
- Develop and manage infrastructure on AWS (including EC2, IAM, S3, and other related services).
- Design efficient database schemas and write optimized SQL queries for PostgreSQL.
- Collaborate with DevOps, front-end developers, and product managers for end-to-end delivery.
- Write unit, integration, and performance tests to ensure code reliability and robustness.
- Monitor, troubleshoot, and optimize application performance in production environments.
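A hedged example of a Lambda handler behind an API Gateway proxy integration, backed by DynamoDB via boto3; the table name and path parameter are invented for illustration.

```python
# Hypothetical Lambda handler: GET /orders/{order_id} -> DynamoDB lookup.
import json
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("orders")   # placeholder table name

def lambda_handler(event, context):
    order_id = (event.get("pathParameters") or {}).get("order_id")
    if not order_id:
        return {"statusCode": 400,
                "body": json.dumps({"error": "order_id required"})}
    item = table.get_item(Key={"order_id": order_id}).get("Item")
    status = 200 if item else 404
    return {"statusCode": status,
            "body": json.dumps(item or {"error": "not found"})}
```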
Required Skills:
- Strong proficiency in Python and Python-based web frameworks.
- Experience with AWS services: Lambda, API Gateway, EC2, S3, CloudWatch.
- Sound knowledge of microservices architecture and asynchronous programming.
- Proficiency with PostgreSQL, including schema design and query optimization.
- Hands-on experience with Docker and containerized deployments.
- Understanding of CI/CD practices and tools like GitHub Actions, Jenkins, or CodePipeline.
- Familiarity with API documentation tools (Swagger/OpenAPI).
- Version control with Git.
Job Description : Software Testing (Only Female)
VirtuBox, the world's premier B2B cloud-based SaaS solution, empowers businesses to forge unforgettable customer experiences that transcend screens and ignite brand loyalty. In short: VirtuBox is transforming customer journeys, one pixel at a time.
Job Summary :
We are seeking a proactive and detail-oriented Software Tester with 1–2 years of experience in manual and/or automation testing. The ideal candidate will work closely with developers and product teams to ensure high-quality software delivery by identifying bugs, writing test cases, and executing comprehensive test cycles.
Key Responsibilities :
- Analyze software requirements and design test cases to ensure functionality and performance.
- Identify, document, and track defects using bug-tracking tools.
- Collaborate with developers and stakeholders to resolve issues and improve software quality.
- Perform functional, regression, system, and performance testing.
- Execute automated testing using tools like Selenium, JMeter, or Appium (see the sketch after this list).
- Participate in agile development processes, including stand-up meetings and sprint planning.
- Prepare detailed test reports and documentation for stakeholders.
- Conduct security and usability testing to ensure compliance with industry standards.
- Manage test data to create realistic testing scenarios.
- Validate bug fixes and ensure all functionalities work correctly before release.
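A small Selenium sketch of an automated login check (assumes Selenium 4.x with Chrome available; the URL and element locators are examples, not a real application):

```python
# Hedged Selenium 4 example: drive a browser through a login flow and assert.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://example.org/login")       # placeholder URL
    driver.find_element(By.NAME, "username").send_keys("qa_user")
    driver.find_element(By.NAME, "password").send_keys("secret")
    driver.find_element(By.CSS_SELECTOR, "button[type=submit]").click()
    assert "dashboard" in driver.current_url, "login did not reach dashboard"
finally:
    driver.quit()   # always release the browser, even on failure
```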
Skills Required :
Soft Skills:
- Strong analytical and problem-solving skills.
- Good communication and teamwork abilities.
- Attention to detail and ability to work under deadlines.
Technical Skills:
- Knowledge of manual testing and automated testing tools (Selenium, JMeter, Appium, etc.).
- Understanding of SDLC (Software Development Life Cycle) and STLC (Software Testing Life Cycle).
- Familiarity with defect tracking tools (JIRA, Bugzilla, etc.).
- Basic programming knowledge (Python, Java, SQL) is a plus.
Eligibility Criteria :
- Bachelor’s degree in Computer Science, IT, or related field.
- 1–2 years of hands-on experience in software testing.
- Excellent analytical and communication skills.
- ISTQB certification is desirable but not mandatory.
- Basic knowledge of any scripting or programming language is a plus.
- Strong problem-solving and analytical skills.
Location: Hybrid/ Remote
Type: Contract / Full‑Time
Experience: 5+ Years
Qualification: Bachelor’s or Master’s in Computer Science or a related technical field
Responsibilities:
- Architect & implement the RAG pipeline: embeddings ingestion, vector search (MongoDB Atlas or similar), and context-aware chat generation (see the sketch after this list).
- Design and build Python‑based services (FastAPI) for generating and updating embeddings.
- Host and apply LoRA/QLoRA adapters for per‑user fine‑tuning.
- Automate data pipelines to ingest daily user logs, chunk text, and upsert embeddings into the vector store.
- Develop Node.js/Express APIs that orchestrate embedding, retrieval, and LLM inference for real‑time chat.
- Manage vector index lifecycle and similarity metrics (cosine/dot‑product).
- Deploy and optimize on AWS (Lambda, EC2, SageMaker), containerization (Docker), and monitoring for latency, costs, and error rates.
- Collaborate with frontend engineers to define API contracts and demo endpoints.
- Document architecture diagrams, API specifications, and runbooks for future team onboarding.
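A compressed, hedged sketch of one RAG hop using the OpenAI Python SDK (v1): embed, retrieve by cosine similarity, then generate with the retrieved context. The model names are placeholders, and the in-memory list stands in for a real vector store such as MongoDB Atlas Vector Search.

```python
# Toy RAG loop: embed documents, retrieve the closest one, answer with it.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY in the environment

def embed(text: str) -> list[float]:
    resp = client.embeddings.create(model="text-embedding-3-small", input=text)
    return resp.data[0].embedding

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return dot / norm

docs = ["Refunds are processed in 5 days.", "Support is available 24/7."]
index = [(d, embed(d)) for d in docs]            # toy in-memory vector store
query = "How long do refunds take?"
q_vec = embed(query)
context = max(index, key=lambda pair: cosine(q_vec, pair[1]))[0]

chat = client.chat.completions.create(
    model="gpt-4o-mini",                          # placeholder model
    messages=[{"role": "user", "content": f"Context: {context}\n\nQ: {query}"}],
)
print(chat.choices[0].message.content)
```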
Required Skills
- Strong Python expertise (FastAPI, async programming).
- Proficiency with Node.js and Express for API development.
- Experience with vector databases (MongoDB Atlas Vector Search, Pinecone, Weaviate) and similarity search.
- Familiarity with OpenAI’s APIs (embeddings, chat completions).
- Hands‑on with parameter‑efficient fine‑tuning (LoRA, QLoRA, PEFT/Hugging Face).
- Knowledge of LLM hosting best practices on AWS (EC2, Lambda, SageMaker).
- Containerization skills (Docker).
- Good understanding of RAG architectures, prompt design, and memory management.
- Strong Git workflow and collaborative development practices (GitHub, CI/CD).
Nice‑to‑Have:
- Experience with Llama family models or other open‑source LLMs.
- Familiarity with MongoDB Atlas free tier and cluster management.
- Background in data engineering for streaming or batch processing.
- Knowledge of monitoring & observability tools (Prometheus, Grafana, CloudWatch).
- Frontend skills in React to prototype demo UIs.
Job Description:
Title : Python AWS Developer with API
Tech Stack : AWS API Gateway, Lambda, Oracle RDS, SQL & database management, object-oriented programming (OOP) principles, JavaScript, Object-Relational Mapper (ORM), Git, Docker, Java dependency management, CI/CD, AWS cloud & S3, Secrets Manager, Python, API frameworks; well-versed in front-end and back-end programming (Python).
Responsibilities:
· Build high-performance APIs using AWS services and Python; write and debug Python code and integrate the application with third-party web services.
· Troubleshoot and debug non-prod defects, with a focus on back-end development, APIs, coding, and monitoring applications.
· Design core application logic.
· Support dependent teams in UAT and perform functional application testing, including Postman testing.
Job Summary:
We are looking for a skilled and motivated Python AWS Engineer to join our team. The ideal candidate will have strong experience in backend development using Python, cloud infrastructure on AWS, and building serverless or microservices-based architectures. You will work closely with cross-functional teams to design, develop, deploy, and maintain scalable and secure applications in the cloud.
Key Responsibilities:
- Develop and maintain backend applications using Python and frameworks like Django or Flask
- Design and implement serverless solutions using AWS Lambda, API Gateway, and other AWS services
- Develop data processing pipelines using services such as AWS Glue, Step Functions, S3, DynamoDB, and RDS
- Write clean, efficient, and testable code following best practices
- Implement CI/CD pipelines using tools like CodePipeline, GitHub Actions, or Jenkins
- Monitor and optimize system performance and troubleshoot production issues
- Collaborate with DevOps and front-end teams to integrate APIs and cloud-native services
- Maintain and improve application security and compliance with industry standards
Required Skills:
- Strong programming skills in Python
- Solid understanding of AWS cloud services (Lambda, S3, EC2, DynamoDB, RDS, IAM, API Gateway, CloudWatch, etc.)
- Experience with infrastructure as code (e.g., CloudFormation, Terraform, or AWS CDK)
- Good understanding of RESTful API design and microservices architecture
- Hands-on experience with CI/CD, Git, and version control systems
- Familiarity with containerization (Docker, ECS, or EKS) is a plus
- Strong problem-solving and communication skills
Preferred Qualifications:
- Experience with PySpark, Pandas, or data engineering tools
- Working knowledge of Django, Flask, or other Python frameworks
- AWS Certification (e.g., AWS Certified Developer – Associate) is a plus
Educational Qualification:
- Bachelor's or Master’s degree in Computer Science, Engineering, or related field
Role Overview:
We are seeking a Senior Software Engineer (SSE) with strong expertise in Kafka, Python, and Azure Databricks to lead and contribute to our healthcare data engineering initiatives. This role is pivotal in building scalable, real-time data pipelines and processing large-scale healthcare datasets in a secure and compliant cloud environment.
The ideal candidate will have a solid background in real-time streaming, big data processing, and cloud platforms, along with strong leadership and stakeholder engagement capabilities.
Key Responsibilities:
- Design and develop scalable real-time data streaming solutions using Apache Kafka and Python (see the sketch after this list).
- Architect and implement ETL/ELT pipelines using Azure Databricks for both structured and unstructured healthcare data.
- Optimize and maintain Kafka applications, Python scripts, and Databricks workflows to ensure performance and reliability.
- Ensure data integrity, security, and compliance with healthcare standards such as HIPAA and HITRUST.
- Collaborate with data scientists, analysts, and business stakeholders to gather requirements and translate them into robust data solutions.
- Mentor junior engineers, perform code reviews, and promote engineering best practices.
- Stay current with evolving technologies in cloud, big data, and healthcare data standards.
- Contribute to the development of CI/CD pipelines and containerized environments (Docker, Kubernetes).
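A hedged, Databricks-style sketch of the streaming work above: consuming a Kafka topic with Spark Structured Streaming and landing it in a Delta table. The broker, topic, and mount paths are placeholders, and a healthcare feed would additionally need PHI handling that this sketch omits.

```python
# Illustrative Structured Streaming job: Kafka -> bronze Delta table.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("hl7_stream").getOrCreate()

stream = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")   # placeholder broker
    .option("subscribe", "hl7.events")                  # placeholder topic
    .load()
    .select(F.col("value").cast("string").alias("payload"),
            F.col("timestamp"))
)

(stream.writeStream
    .format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/hl7")  # enables recovery
    .start("/mnt/bronze/hl7_events"))
```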
Required Skills & Qualifications:
- 4+ years of hands-on experience in data engineering roles.
- Strong proficiency in Kafka (including Kafka Streams, Kafka Connect, Schema Registry).
- Proficient in Python for data processing and automation.
- Experience with Azure Databricks (or readiness to ramp up quickly).
- Solid understanding of cloud platforms, with a preference for Azure (AWS/GCP is a plus).
- Strong knowledge of SQL and NoSQL databases; data modeling for large-scale systems.
- Familiarity with containerization tools like Docker and orchestration using Kubernetes.
- Exposure to CI/CD pipelines for data applications.
- Prior experience with healthcare datasets (EHR, HL7, FHIR, claims data) is highly desirable.
- Excellent problem-solving abilities and a proactive mindset.
- Strong communication and interpersonal skills to work in cross-functional teams.
Hybrid work mode
(Azure) EDW: Experience loading Star-schema data warehouses using framework architectures, including loading Type 2 dimensions; ingesting data from various sources (structured and semi-structured), with hands-on experience ingesting via APIs into lakehouse architectures.
Key Skills: Azure Databricks, Azure Data Factory, Azure Data Lake Gen2 Storage, SQL (expert), Python (intermediate), Azure Cloud Services knowledge, data analysis (SQL), data warehousing, documentation (BRD, FRD, user story creation).
About the Role
At Ceryneian, we’re building a next-generation, research-driven algorithmic trading platform aimed at democratizing access to hedge fund-grade financial analytics. Headquartered in California, Ceryneian is a fintech innovation company dedicated to empowering traders with sophisticated yet accessible tools for quantitative research, strategy development, and execution.
Our flagship platform is currently under development. As a Backend Engineer, you will play a foundational role in designing and building the core trading engine and research infrastructure from the ground up. Your work will focus on developing performance-critical components that power backtesting, real-time strategy execution, and seamless integration with brokers and data providers. You’ll be responsible for bridging core engine logic with Python-based strategy interfaces, supporting a modular system architecture for isolated and scalable strategy execution, and building robust abstractions for data handling and API interactions. This role is central to delivering the reliability, flexibility, and performance that our users will rely on in fast-moving financial markets.
We are a remote-first team and are open to hiring exceptional candidates globally.
Core Tasks
· Build and maintain the trading engine core for execution, backtesting, and event logging.
· Develop isolated strategy execution runners to support multi-user, multi-strategy environments.
· Implement abstraction layers for brokers and market data feeds to offer a unified API experience.
· Bridge the core engine language with Python strategies using gRPC, ZeroMQ, or similar interop technologies.
· Implement logic to parse and execute the JSON-based strategy DSL from the strategy builder (see the sketch after this list).
· Design compute-optimized components for multi-asset workflows and scalable backtesting.
· Capture real-time state, performance metrics, and slippage for both live and simulated runs.
· Collaborate with infrastructure engineers to support high-availability deployments.
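A toy interpreter for a JSON strategy DSL of the kind described above; the rule schema ("if"/"then" with an indicator comparison) is invented purely for illustration, and it is shown in Python even though the production engine may be a compiled language.

```python
# Hypothetical DSL: evaluate one rule against a snapshot of market indicators.
import json

RULE = json.loads("""
{"if": {"indicator": "sma_20", "op": ">", "value": 100.0},
 "then": {"action": "BUY", "qty": 10}}
""")

OPS = {">": lambda a, b: a > b, "<": lambda a, b: a < b}

def evaluate(rule: dict, market: dict):
    cond = rule["if"]
    if OPS[cond["op"]](market[cond["indicator"]], cond["value"]):
        return rule["then"]          # the order to place
    return None                      # condition not met, no action

print(evaluate(RULE, {"sma_20": 101.3}))   # -> {'action': 'BUY', 'qty': 10}
```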
Top Technical Competencies
· Proficiency in distributed systems, concurrency, and system design.
· Strong backend/server-side development skills using C++, Rust, C#, Erlang, or Python.
· Deep understanding of data structures and algorithms with a focus on low-latency performance.
· Experience with event-driven and messaging-based architectures (e.g., ZeroMQ, Redis Streams).
· Familiarity with Linux-based environments and system-level performance tuning.
Bonus Competencies
· Understanding of financial markets, asset classes, and algorithmic trading strategies.
· 3–5 years of prior Backend experience.
· Hands-on experience with backtesting frameworks or financial market simulators.
· Experience with sandboxed execution environments or paper trading platforms.
· Advanced knowledge of multithreading, memory optimization, or compiler construction.
· Educational background from Tier-I or Tier-II institutions with strong computer science fundamentals, a passion for scalable system design, and a drive to build cutting-edge fintech infrastructure.
What We Offer
· Opportunity to shape the backend architecture of a next-gen fintech startup.
· A collaborative, technically driven culture.
· Competitive compensation with performance-based bonuses.
· Flexible working hours and a remote-friendly environment for candidates across the globe.
· Exposure to financial modeling, trading infrastructure, and real-time applications.
· Collaboration with a world-class team from Pomona, UCLA, Harvey Mudd, and Claremont McKenna.
Ideal Candidate
You’re a backend-first thinker who’s obsessed with reliability, latency, and architectural flexibility. You enjoy building scalable systems that transform complex strategy logic into high-performance, real-time trading actions. You think in microseconds, architect for fault tolerance, and build APIs designed for developer extensibility.

About NxtWave
NxtWave is one of India’s fastest-growing ed-tech startups, reshaping the tech education landscape by bridging the gap between industry needs and student readiness. With prestigious recognitions such as Technology Pioneer 2024 by the World Economic Forum and Forbes India 30 Under 30, NxtWave’s impact continues to grow rapidly across India.
Our flagship on-campus initiative, NxtWave Institute of Advanced Technologies (NIAT), offers a cutting-edge 4-year Computer Science program designed to groom the next generation of tech leaders, located in Hyderabad’s global tech corridor.
Know more:
🌐 NxtWave | NIAT
About the Role
As a PhD-level Software Development Instructor, you will play a critical role in building India’s most advanced undergraduate tech education ecosystem. You’ll be mentoring bright young minds through a curriculum that fuses rigorous academic principles with real-world software engineering practices. This is a high-impact leadership role that combines teaching, mentorship, research alignment, and curriculum innovation.
Key Responsibilities
- Deliver high-quality classroom instruction in programming, software engineering, and emerging technologies.
- Integrate research-backed pedagogy and industry-relevant practices into classroom delivery.
- Mentor students in academic, career, and project development goals.
- Take ownership of curriculum planning, enhancement, and delivery aligned with academic and industry excellence.
- Drive research-led content development, and contribute to innovation in teaching methodologies.
- Support capstone projects, hackathons, and collaborative research opportunities with industry.
- Foster a high-performance learning environment in classes of 70–100 students.
- Collaborate with cross-functional teams for continuous student development and program quality.
- Actively participate in faculty training, peer reviews, and academic audits.
Eligibility & Requirements
- Ph.D. in Computer Science, IT, or a closely related field from a recognized university.
- Strong academic and research orientation, preferably with publications or project contributions.
- Prior experience in teaching/training/mentoring at the undergraduate/postgraduate level is preferred.
- A deep commitment to education, student success, and continuous improvement.
Must-Have Skills
- Expertise in Python, Java, JavaScript, and advanced programming paradigms.
- Strong foundation in Data Structures, Algorithms, OOP, and Software Engineering principles.
- Excellent communication, classroom delivery, and presentation skills.
- Familiarity with academic content tools like Google Slides, Sheets, Docs.
- Passion for educating, mentoring, and shaping future developers.
Good to Have
- Industry experience or consulting background in software development or research-based roles.
- Proficiency in version control systems (e.g., Git) and agile methodologies.
- Understanding of AI/ML, Cloud Computing, DevOps, Web or Mobile Development.
- A drive to innovate in teaching, curriculum design, and student engagement.
Why Join Us?
- Be at the forefront of shaping India’s tech education revolution.
- Work alongside IIT/IISc alumni, ex-Amazon engineers, and passionate educators.
- Competitive compensation with strong growth potential.
- Create impact at scale by mentoring hundreds of future-ready tech leaders.
A fast-growing, tech-driven loyalty programs and benefits business is looking to hire a Technical Architect with the expertise outlined below.
Key Responsibilities:
1. Architectural Design & Governance
• Define, document, and maintain the technical architecture for projects and product modules.
• Ensure architectural decisions meet scalability, performance, and security requirements.
2. Solution Development & Technical Leadership
• Translate product and client requirements into robust technical solutions, balancing short-term deliverables with long-term product viability.
• Oversee system integrations, ensuring best practices in coding standards, security, and performance optimization.
3. Collaboration & Alignment
• Work closely with Product Managers and Project Managers to prioritize and plan feature development.
• Facilitate cross-team communication to ensure technical feasibility and timely execution of features or client deliverables.
4. Mentorship & Code Quality
• Provide guidance to senior developers and junior engineers through code reviews, design reviews, and technical coaching.
• Advocate for best-in-class engineering practices, encouraging the use of CI/CD, automated testing, and modern development tooling.
5. Risk Management & Innovation
• Proactively identify technical risks or bottlenecks, proposing mitigation strategies.
• Investigate and recommend new technologies, frameworks, or tools that enhance product capabilities and developer productivity.
6. Documentation & Standards
• Maintain architecture blueprints, design patterns, and relevant documentation to align the team on shared standards.
• Contribute to the continuous improvement of internal processes, ensuring streamlined development and deployment workflows.
Skills:
1. Technical Expertise
• 7–10 years of overall experience in software development with at least a couple of years in senior or lead roles.
• Strong proficiency in at least one mainstream programming language (e.g., Golang, Python, JavaScript).
• Hands-on experience with architectural patterns (microservices, monolithic systems, event-driven architectures).
• Good understanding of Cloud Platforms (AWS, Azure, or GCP) and DevOps practices (CI/CD pipelines, containerization with Docker/Kubernetes).
• Familiarity with relational and NoSQL databases (e.g., PostgreSQL, MySQL, MongoDB).
Location: Saket, Delhi (Work from Office)
Schedule: Monday – Friday
Experience : 7-10 Years
Compensation: As per industry standards
Role Overview:
We are looking for a skilled Golang Developer with 3.5+ years of experience in building scalable backend services and deploying cloud-native applications using AWS. This is a key position that requires a deep understanding of Golang and cloud infrastructure to help us build robust solutions for global clients.
Key Responsibilities:
- Design and develop backend services, APIs, and microservices using Golang.
- Build and deploy cloud-native applications on AWS using services like Lambda, EC2, S3, RDS, and more.
- Optimize application performance, scalability, and reliability.
- Collaborate closely with frontend, DevOps, and product teams.
- Write clean, maintainable code and participate in code reviews.
- Implement best practices in security, performance, and cloud architecture.
- Contribute to CI/CD pipelines and automated deployment processes.
- Debug and resolve technical issues across the stack.
Required Skills & Qualifications:
- 3.5+ years of hands-on experience with Golang development.
- Strong experience with AWS services such as EC2, Lambda, S3, RDS, DynamoDB, CloudWatch, etc.
- Proficient in developing and consuming RESTful APIs.
- Familiar with Docker, Kubernetes or AWS ECS for container orchestration.
- Experience with Infrastructure as Code (Terraform, CloudFormation) is a plus.
- Good understanding of microservices architecture and distributed systems.
- Experience with monitoring tools like Prometheus, Grafana, or ELK Stack.
- Familiarity with Git, CI/CD pipelines, and agile workflows.
- Strong problem-solving, debugging, and communication skills.
Nice to Have:
- Experience with serverless applications and architecture (AWS Lambda, API Gateway, etc.)
- Exposure to NoSQL databases like DynamoDB or MongoDB.
- Contributions to open-source Golang projects or an active GitHub portfolio.
🚀 Hiring: Data Engineer | GCP + Spark + Python + .NET |
| 6–10 Yrs | Gurugram (Hybrid)
We’re looking for a skilled Data Engineer with strong hands-on experience in GCP, Spark-Scala, Python, and .NET.
📍 Location: Suncity, Sector 54, Gurugram (Hybrid – 3 days onsite)
💼 Experience: 6–10 Years
⏱️ Notice Period: Immediate joiners only
Required Skills:
- 5+ years of experience in distributed computing (Spark) and software development.
- 3+ years of experience in Spark-Scala
- 5+ years of experience in Data Engineering.
- 5+ years of experience in Python.
- Fluency in working with databases (preferably Postgres).
- Have a sound understanding of object-oriented programming and development principles.
- Experience working in an Agile Scrum or Kanban development environment.
- Experience working with version control software (preferably Git).
- Experience with CI/CD pipelines.
- Experience with automated testing, including integration/delta, load, and performance testing
Job Description:
Position: Python Technical Architect
Major Responsibilities:
● Develop and customize solutions, including workflows, Workviews, and application integrations.
● Integrate with other enterprise applications and systems.
● Perform system upgrades and migrations to ensure optimal performance.
● Troubleshoot and resolve issues related to applications and workflows using Diagnostic console.
● Ensure data integrity and security within the system.
● Maintain documentation for system configurations, workflows, and processes.
● Stay updated on best practices, new features and industry trends.
● Hands-on in Waterfall & Agile Scrum methodology.
● Working on software issues and specifications and performing Design/Code Review(s).
● Engaging in the assignment of work to the development team resources, ensuring effective transition of knowledge, design assumptions and development expectations.
● Ability to mentor developers and lead cross-functional technical teams.
● Collaborate with stakeholders to gather requirements and translate them into technical specifications for effective workflow/Workview design.
● Assist in the training of end-users and provide support as needed.
● Contributing to the organizational values by actively working with agile development teams, methodologies, and toolsets.
● Driving concise, structured, and effective communication with peers and clients.
Key Capabilities and Competencies
● Proven experience as a Software Architect or Technical Project Manager with architectural responsibilities.
● Strong proficiency in Python and relevant frameworks (Django, Flask, FastAPI).
● Strong understanding of software development lifecycle (SDLC), agile methodologies (Scrum, Kanban) and DevOps practices.
● Expertise in Azure cloud ecosystem and architecture design patterns.
● Familiarity with Azure DevOps, CI/CD pipelines, monitoring and logging.
● Experience with RESTful APIs, microservices architecture and asynchronous processing.
● Deep understanding of insurance domain processes such as claims management, policy administration etc.
● Experience in database design and data modelling with SQL (MySQL) and NoSQL (Azure Cosmos DB).
● Knowledge of security best practices including data encryption, API security and compliance standards.
● Knowledge of SAST and DAST security tools is a plus.
● Strong documentation skill for articulating architecture decisions and technical concepts to stakeholders.
● Experience with system integration using middleware or web services.
● Server load balancing; planning, configuration, maintenance, and administration of server systems.
● Experience with developing reusable assets such as prototypes, solution designs, documentation and other materials that contribute to department efficiency.
● Highly cognizant of the DevOps approach like ensuring basic security measures.
● Technical writing skills, strong networking, and communication style with the ability to formulate professional emails, presentations, and documents.
● Passion for technology trends in the insurance industry and emerging technology space.
Qualification and Experience
● Recognized with a Bachelor’s degree in Computer Science, Information Technology, or equivalent.
● Work experience - Overall experience 10-12 years
● Recognizable domain knowledge and awareness of basic insurance and regulatory frameworks.
● Previous experience working in the insurance industry (AINS Certification is a plus).
Job Title : Backend Developer (Node.js or Python/Django)
Experience : 2 to 5 Years
Location : Connaught Place, Delhi (Work From Office)
Job Summary :
We are looking for a skilled and motivated Backend Developer (Node.js or Python/Django) to join our in-house engineering team.
Key Responsibilities :
- Design, develop, test, and maintain robust backend systems using Node.js or Python/Django.
- Build and integrate RESTful APIs, including third-party authentication APIs (OAuth, JWT, etc.); a JWT sketch follows this list.
- Work with data stores like Redis and Elasticsearch to support caching and search features.
- Collaborate with frontend developers, product managers, and QA teams to deliver complete solutions.
- Ensure code quality, maintainability, and performance optimization.
- Write clean, scalable, and well-documented code.
- Participate in code reviews and contribute to team best practices.
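A hedged example of issuing and verifying a JWT with the PyJWT library; the secret, claims, and expiry window are placeholder choices for illustration.

```python
# Illustrative JWT issue/verify pair using PyJWT (HS256).
import datetime
import jwt  # PyJWT

SECRET = "change-me"  # placeholder; load from a secrets manager in production

def issue_token(user_id: str) -> str:
    payload = {
        "sub": user_id,
        "exp": datetime.datetime.now(datetime.timezone.utc)
               + datetime.timedelta(hours=1),
    }
    return jwt.encode(payload, SECRET, algorithm="HS256")

def verify_token(token: str) -> dict:
    # raises jwt.ExpiredSignatureError / jwt.InvalidTokenError on bad tokens
    return jwt.decode(token, SECRET, algorithms=["HS256"])

print(verify_token(issue_token("user-42"))["sub"])  # -> user-42
```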
Required Skills :
- 2 to 5 Years of hands-on experience in backend development.
- Proficiency in Node.js and/or Python (Django framework).
- Solid understanding and experience with Authentication APIs.
- Experience with Redis and Elasticsearch for caching and full-text search.
- Strong knowledge of REST API design and best practices.
- Experience working with relational and/or NoSQL databases.
- Must have completed at least 2 end-to-end backend projects.
Nice to Have :
- Experience with Docker or containerized environments.
- Familiarity with CI/CD pipelines and DevOps workflows.
- Exposure to cloud platforms like AWS, GCP, or Azure.
🚀 We’re Hiring! | AI/ML Engineer – Computer Vision
📍 Location: Noida | 🕘 Full-Time
🔍 What We’re Looking For:
• 4+ years in AI/ML (Computer Vision)
• Python, OpenCV, TensorFlow, PyTorch, etc.
• Hands-on with object detection, face recognition, and classification (see the sketch below)
• Git, Docker, Linux experience
• Curious, driven, and ready to build impactful products
💡 Be part of a fast-growing team, build products used by brands like Biba, Zivame, Costa Coffee & more!
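A minimal OpenCV face-detection sketch touching the computer-vision skills above; it uses a Haar cascade bundled with OpenCV, and the image path is a placeholder.

```python
# Hedged OpenCV example: detect faces in an image with a bundled Haar cascade.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
img = cv2.imread("input.jpg")                      # placeholder image path
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("faces.jpg", img)
print(f"detected {len(faces)} face(s)")
```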