
50+ AWS (Amazon Web Services) Jobs in India

Apply to 50+ AWS (Amazon Web Services) Jobs on CutShort.io. Find your next job, effortlessly. Browse AWS (Amazon Web Services) Jobs and apply today!

Inflectionio

Renu Philip
Posted by Renu Philip
Bengaluru (Bangalore)
3 - 5 yrs
₹20L - ₹30L / yr
Amazon Web Services (AWS)
Kubernetes
Jenkins
Chef
CI/CD
+6 more

We are looking for a DevOps Engineer with hands-on experience in managing production infrastructure using AWS, Kubernetes, and Terraform. The ideal candidate will have exposure to CI/CD tools and queueing systems, along with a strong ability to automate and optimize workflows.


Responsibilities: 

* Manage and optimize production infrastructure on AWS, ensuring scalability and reliability.

* Deploy and orchestrate containerized applications using Kubernetes.

* Implement and maintain infrastructure as code (IaC) using Terraform.

* Set up and manage CI/CD pipelines using tools like Jenkins or Chef to streamline deployment processes.

* Troubleshoot and resolve infrastructure issues to ensure high availability and performance.

* Collaborate with cross-functional teams to define technical requirements and deliver solutions.

* Nice-to-have: Manage queueing systems like Amazon SQS, Kafka, or RabbitMQ.



Requirements: 

* 2+ years of experience with AWS, including practical exposure to its services in production environments.

* Demonstrated expertise in Kubernetes for container orchestration.

* Proficiency in using Terraform for managing infrastructure as code.

* Exposure to at least one CI/CD tool, such as Jenkins or Chef.

* Nice-to-have: Experience managing queueing systems like SQS, Kafka, or RabbitMQ.
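The queueing systems named in this listing (SQS, Kafka, RabbitMQ) all share the same produce/consume contract. A minimal in-memory sketch using only Python's standard library — the function name and the uppercase "processing" step are purely illustrative, not any employer's stack:

```python
import queue
import threading

def run_pipeline(messages):
    """Push messages through an in-memory queue with one worker thread,
    mimicking the produce/consume pattern of SQS, Kafka, or RabbitMQ."""
    q = queue.Queue()
    results = []

    def consumer():
        while True:
            msg = q.get()
            if msg is None:              # sentinel: no more work
                q.task_done()
                break
            results.append(msg.upper())  # stand-in for real processing
            q.task_done()

    worker = threading.Thread(target=consumer)
    worker.start()
    for m in messages:                   # producer side
        q.put(m)
    q.put(None)
    q.join()                             # block until every message is acked
    worker.join()
    return results
```

A real broker adds durability, redelivery, and consumer groups; the `task_done()`/`join()` pair here plays the role of message acknowledgement.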

AI Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Mumbai, Bengaluru (Bangalore), Hyderabad, Gurugram
6 - 10 yrs
₹32L - ₹42L / yr
ETL
SQL
Google Cloud Platform (GCP)
Data engineering
ELT
+17 more

Role & Responsibilities:

We are looking for a strong Data Engineer to join our growing team. The ideal candidate brings solid ETL fundamentals, hands-on pipeline experience, and cloud platform proficiency — with a preference for GCP / BigQuery expertise.


Responsibilities:

  • Design, build, and maintain scalable data pipelines and ETL/ELT workflows
  • Work with Dataform or DBT to implement transformation logic and data models
  • Develop and optimize data solutions on GCP (BigQuery, GCS) or AWS/Azure
  • Support data migration initiatives and data mesh architecture patterns
  • Collaborate with analysts, scientists, and business stakeholders to deliver reliable data products
  • Apply data governance and quality best practices across the data lifecycle
  • Troubleshoot pipeline issues and drive proactive monitoring and resolution


Ideal Candidate:

  • Strong Data Engineer Profile
  • Must have 6+ years of hands-on experience in Data Engineering, with strong ownership of end-to-end data pipeline development.
  • Must have strong experience in ETL/ELT pipeline design, transformation logic, and data workflow orchestration.
  • Must have hands-on experience with any one of the following: Dataform, dbt, or BigQuery, with practical exposure to data transformation, modeling, or cloud data warehousing.
  • Must have working experience on any cloud platform: GCP (preferred), AWS, or Azure, including object storage (GCS, S3, ADLS).
  • Must have strong SQL skills with experience in writing complex queries and optimizing performance.
  • Must have programming experience in Python and/or SQL for data processing.
  • Must have experience in building and maintaining scalable data pipelines and troubleshooting data issues.
  • Exposure to data migration projects and/or data mesh architecture concepts.
  • Experience with Spark / PySpark or large-scale data processing frameworks.
  • Experience working in product-based companies or data-driven environments.
  • Bachelor’s or Master’s degree in Computer Science, Engineering, or related field.
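As context for the ETL/ELT fundamentals this role asks for, a toy extract-transform-load flow in Python; the field names and the upsert-by-key "warehouse" are invented for illustration:

```python
def extract(rows):
    """Extract: yield raw records (an in-memory stand-in for a source
    system or object-store export)."""
    yield from rows

def transform(record):
    """Transform: normalize field names and types."""
    return {
        "user_id": int(record["id"]),
        "country": record["country"].strip().upper(),
    }

def load(records, warehouse):
    """Load: idempotent upsert keyed on user_id."""
    for r in records:
        warehouse[r["user_id"]] = r
    return warehouse

def run_etl(rows, warehouse=None):
    """Wire the three stages together; re-running is safe (idempotent)."""
    warehouse = {} if warehouse is None else warehouse
    return load((transform(r) for r in extract(rows)), warehouse)
```

In a real pipeline the same shape holds, with the stages spread across an ingestion tool, a transformation layer (dbt/Dataform), and the warehouse itself.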


NOTE:

  • An interview drive is scheduled for 28th and 29th March 2026; shortlisted candidates are expected to be available on these interview dates. Only immediate joiners will be considered.
REConnect Energy

Ariba Khan
Posted by Ariba Khan
Bengaluru (Bangalore)
5 - 7 yrs
Up to ₹30L / yr (varies)
Python
Java
MongoDB
PostgreSQL
Amazon Web Services (AWS)

About Us:

REConnect Energy’s GRIDConnect platform helps integrate and manage energy generation and consumption for thousands of renewable energy assets and grid operators. We currently serve customers across India, Bhutan, and the Middle East, with expansion planned in the US and European markets.


We are headquartered in Central Bangalore with a team of 150+ and growing. You will join the Bangalore-based Engineering team as a senior member and work at the intersection of Energy, Weather & Climate Sciences, and AI.


Responsibilities:

● Engineering - Take complete ownership of engineering stacks. Define and maintain software systems architecture for high-availability, 24x7 systems.

● Leadership - Lead a team of engineers and analysts managing engineering development as well as round-the-clock service delivery. Provide mentorship and technical guidance to team members and contribute to their professional growth. Manage weekly and monthly reviews with team members and senior management.

● Product Development - Contribute to new product development by engineering solutions to product requirements. Interact with cross-functional teams to bring forward a technology perspective.

● Operations - Manage delivery of critical services to power utilities with an expectation of zero downtime. Take ownership of uninterrupted product uptime.


Requirements:

● Bachelor's or Master’s degree in Computer Science, Software Engineering, Electrical Engineering, or equivalent

● Proficiency in Python programming with frameworks like Django, FastAPI, or Flask, and in Java frameworks like Spring, Hibernate, and Spring Boot

● Ability to debug and resolve technical issues that arise during development or after deployment

● Experience with databases, including MySQL and NoSQL

● Experience designing, developing, and maintaining high-availability systems

● Experience with the MVC pattern, Tomcat, Git, and Jira

● Experience working with the AWS cloud platform

● 4-5 years of experience building highly available systems

● 2-3 years of experience leading a team of engineers and analysts

● Strong analytical and data-driven approach to problem solving

Mango Sciences
Remote only
7 - 12 yrs
₹20L - ₹40L / yr
Python
SQL
ETL
Data pipeline
Data warehousing
+12 more

The Mission: We are looking for a visionary Technical Leader to own our healthcare data ecosystem from the first byte to the final dashboard. You won't just be managing a platform; you’ll be the primary architect of a clinical data engine that powers life-changing analytics. If you are an expert in SQL and Python who thrives on solving the "puzzle" of healthcare interoperability (FHIR/HL7) while mentoring a high-performing team, this is your seat at the table.

What You’ll Own

  • Architectural Sovereignty: Define the end-to-end blueprint for our data warehouse (staging, marts, and semantic layers). You choose the frameworks, set the coding standards, and decide how we handle complex dimensional modeling and SCDs.
  • Engineering Excellence: Lead by example. You’ll write production-grade Python for ingestion frameworks and craft advanced, set-based SQL transformations that others use as gold-standard references.
  • The Interoperability Bridge: Turn the chaos of EHR exports, REST APIs, and claims data into clean, FHIR-aligned governed datasets. You ensure our data speaks the language of modern healthcare.
  • Technical Mentorship: Act as the "Engineer’s Engineer." You’ll run design reviews, champion CI/CD best practices, and build the runbooks that keep our small but mighty team efficient.
  • Security by Design: Direct the implementation of HIPAA-compliant data flows, ensuring encryption, auditability, and access controls are baked into the architecture, not bolted on.
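The SCD handling mentioned above can be sketched in miniature. A Slowly Changing Dimension Type 2 upsert expires the current row and appends a new version; this is a toy Python stand-in (column names are invented; in practice this logic lives in warehouse SQL, not application code):

```python
def scd2_upsert(dim_rows, key, new_row, as_of):
    """Apply an SCD Type 2 change: close out the current row for `key`
    and append the new version with fresh validity dates."""
    for row in dim_rows:
        if row["key"] == key and row["is_current"]:
            if {k: row[k] for k in new_row} == new_row:
                return dim_rows              # no attribute change: no-op
            row["is_current"] = False        # expire the old version
            row["valid_to"] = as_of
    dim_rows.append({"key": key, **new_row,
                     "valid_from": as_of, "valid_to": None,
                     "is_current": True})
    return dim_rows
```

The no-op branch matters: reprocessing the same source row must not create a spurious new version.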

The Stack You’ll Command

  • Languages: Expert-level SQL (CTEs, window functions, tuning) and production Python.
  • Databases: Deep polyglot experience across MSSQL, PostgreSQL, Oracle, and NoSQL (MongoDB/Elasticsearch).
  • Orchestration: Advanced Apache Airflow (SLAs, retries, and complex DAGs).
  • Ecosystem: GitHub for CI/CD, Tableau/PowerBI for semantic layers, and Unix/Linux for shell scripting.

Who You Are

  • Experienced: You have 8–12+ years in data engineering, with a significant portion spent in a Lead or Architect capacity.
  • Healthcare-Fluent: You understand the stakes of PHI. You’ve worked with FHIR/HL7 and know how to map clinical resources to analytical models.
  • Performance-Obsessed: You don’t just make it work; you make it fast. You’re the person who uses EXPLAIN/ANALYZE to shave minutes off a query.
  • Culture-Builder: You believe in documentation, observability (lineage/freshness), and "leaving the campground cleaner than you found it."
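The EXPLAIN/ANALYZE habit described above is easy to demonstrate with SQLite's equivalent, EXPLAIN QUERY PLAN; the table, index, and data here are invented for the demonstration:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE encounters (id INTEGER PRIMARY KEY, patient_id INTEGER, ts TEXT)")
con.executemany("INSERT INTO encounters (patient_id, ts) VALUES (?, ?)",
                [(i % 100, f"2024-01-{i % 28 + 1:02d}") for i in range(1000)])

def plan(sql):
    """Return the query-plan detail strings for a statement."""
    return [row[3] for row in con.execute("EXPLAIN QUERY PLAN " + sql)]

query = "SELECT * FROM encounters WHERE patient_id = 42"
before = plan(query)                 # full table scan
con.execute("CREATE INDEX idx_patient ON encounters (patient_id)")
after = plan(query)                  # index search

print(before, after)
```

PostgreSQL's `EXPLAIN (ANALYZE)` gives the same before/after signal, plus actual row counts and timings.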

Bonus Points for:

  • Privacy Pro: Experience with PII/PHI de-identification and privacy-by-design.
  • Cloud Native: Deep familiarity with Azure, AWS, or GCP security and data services.
  • Search Experts: Experience with near-real-time indexing via Elasticsearch.

To have your resume considered for the next round, please fill out the Google Form with your updated resume.

 

Pre-screen Question: https://forms.gle/q3CzfdSiWoXTCEZJ7

 

Details: https://forms.gle/FGgkmQvLnS8tJqo5A

Cambridge Wealth (Baker Street Fintech)
Sangeeta Bhagwat
Posted by Sangeeta Bhagwat
Pune
0 - 2 yrs
₹1.9L - ₹4L / yr
SQL
Python
Amazon Web Services (AWS)
Spotfire
Qlikview
+12 more

Who are we aka "About Us":

 

We are an early-stage fintech startup working on exciting fintech products for some of the top 5 global banks while building our own. If you are looking for a place where you can make a mark and not just be a cog in the wheel, Baker Street Fintech Pvt Ltd (parent company) might be the place for you. We have a flat, ownership-oriented culture and deliver world-class quality. You will be working with a founding team that has delivered over 26 industry-leading product experiences and won Webby awards for digital strategy. In short, a bleeding-edge team.

 

As Cambridge Wealth, we are well-established in the wealth and mutual fund distribution segment, having won awards from BSE Star as well as Mutual Fund houses. Our UHNI/HNI/NRI clients include renowned professionals from various industries. 

 

What are we looking for a.k.a “The JD” :

 

We are seeking a skilled and detail-oriented Data Analyst to join our product team. As a Data Analyst, you will play a crucial role in extracting, analysing, and interpreting complex financial data to drive strategic decision-making and optimize our data solutions. The ideal candidate should possess a strong foundation in SQL / NoSQL databases, Python programming, and proficiency in tools like PostgreSQL and Excel. A deep understanding of financial concepts is also a plus. Additionally, having an interest in business intelligence tools and machine learning will be valuable for this role.

 

Responsibilities:

  • Write complex SQL queries.
  • Use Python for data manipulation, analysis, and visualisation, with libraries such as pandas, matplotlib, and psycopg.
  • Perform database optimization, indexing, and query tuning to ensure high performance.
  • Monitor and maintain data quality, troubleshoot data-related issues, and implement solutions to optimize data integrity and performance.
  • Design, configure, and maintain PostgreSQL databases.
  • Set up and manage database clusters, replication, and backups for disaster recovery.
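To illustrate the data-quality monitoring responsibility above, a small stand-alone check in Python (the field names are hypothetical, not this team's schema):

```python
def quality_report(rows, required_fields):
    """Summarize basic data-quality signals for a batch of records:
    missing required fields and exact-duplicate rows."""
    missing = {f: 0 for f in required_fields}
    seen, duplicates = set(), 0
    for row in rows:
        for f in required_fields:
            if row.get(f) in (None, ""):
                missing[f] += 1
        fingerprint = tuple(sorted(row.items()))
        if fingerprint in seen:
            duplicates += 1
        seen.add(fingerprint)
    return {"rows": len(rows), "missing": missing, "duplicates": duplicates}
```

The same checks are usually expressed as SQL assertions run on a schedule; the Python form is just the smallest runnable version.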

 

Preferred Qualifications:

  • Intermediate-level Excel skills for data analysis and reporting.
  • Strong communication skills to present findings effectively and recommendations to both technical and non-technical stakeholders.
  • Detail-oriented mindset with a commitment to data accuracy and quality.

 

*(Only applicants who have completed their educational commitments are requested to apply.)

 

Not sure whether you should apply? Here's a quick checklist to make things easier. You are someone who:

  • Has worked with an early-stage startup (0-1.5 years preferably) or is looking to do so.
  • Is ready to be part of a zero-to-one journey, which means being involved in building fintech products and processes from the ground up.
  • Is comfortable working in an unstructured environment with a small team, deciding what your day looks like and taking the initiative to pick up the right piece of work, own it, and work with the founding team on it.
  • Does not need someone checking up on you every few hours. It is up to you to schedule check-ins whenever you find the need to; otherwise, we assume you are progressing well with your tasks. You will be expected to find solutions to problems and suggest improvements.
  • Wants complete ownership of the role and the freedom to drive it the way you think is right.
  • Is a self-starter who takes ownership of deliverables, builds consensus with the team on approach and methods, and delivers on them.
  • Is looking to stick around for the long term and grow with the company.

 

Remote only
0 - 1 yrs
₹1L - ₹2L / yr
Information Technology
IT infrastructure
Troubleshooting
Disaster recovery
Active Directory
+9 more

Job Description

Position Title: IT Intern (Full Time)

Department: Information Technology

Work Mode: Work From Home (WFH)

Educational Qualification: B.Tech (IT) / M.Tech (IT)

Shift: Rotational Shifts (6 AM to 3 PM, 2 PM to 11 PM, and 10 PM to 7 AM)


Role Summary

The IT Intern will support day-to-day IT operations and assist in maintaining the organization’s IT infrastructure. This role provides structured exposure to desktop support, cloud platforms, user management, server infrastructure, and IT security practices under the guidance of senior IT team members.


Key Responsibilities

  • Assist in troubleshooting and resolving basic desktop, software, hardware, and network-related issues under supervision.
  • Support user account management activities using Azure Entra ID (Azure Active Directory), Active Directory, and Microsoft 365.
  • Assist the IT team in configuring, monitoring, and supporting AWS cloud services, including EC2, S3, IAM, and WorkSpaces.
  • Support maintenance and monitoring of on-premises server infrastructure, internal applications, and email services.
  • Assist with data backups, basic disaster recovery tasks, and implementation of security procedures in line with company policies.
  • Create, update, and maintain technical documentation, SOPs, and knowledge base articles.
  • Collaborate with internal teams to support system upgrades, IT infrastructure improvements, and ongoing IT projects.
  • Adhere to company IT policies, data security standards, and confidentiality requirements.
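For the backup and recovery tasks listed above, a minimal checksum-verified copy in Python (paths are illustrative; real backup tooling adds scheduling, retention, and off-site storage):

```python
import hashlib
import shutil
import tempfile
from pathlib import Path

def sha256_of(path):
    """Stream a file through SHA-256 so large backups fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def backup(src, dest_dir):
    """Copy a file and verify the copy against the source checksum,
    so a silently corrupted backup is caught at write time."""
    dest = Path(dest_dir) / Path(src).name
    shutil.copy2(src, dest)
    if sha256_of(src) != sha256_of(dest):
        raise IOError(f"backup verification failed for {dest}")
    return dest
```

Verifying at restore time matters just as much; a disaster-recovery drill is essentially running `sha256_of` against the copy you hope still works.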


Required Skills & Competencies

  • Basic understanding of IT infrastructure, networking concepts, and operating systems
  • Familiarity with cloud platforms such as AWS and/or Microsoft Azure
  • Fundamental knowledge of Active Directory and user access management
  • Strong willingness to learn and adapt to new technologies
  • Good analytical, problem-solving, and communication skills
  • Ability to work independently in a remote environment


Technical Requirements

  • Personal laptop/desktop with required specifications
  • Reliable internet connectivity to support remote work


Learning & Development Opportunities

  • Hands-on exposure to enterprise IT environments
  • Practical experience with cloud technologies and infrastructure support
  • Mentorship from experienced IT professionals
  • Opportunity to develop technical, documentation, and operational skills


Planview

Bisman Gill
Posted by Bisman Gill
Bengaluru (Bangalore)
12yrs+
Up to ₹65L / yr (varies)
Linux/Unix
Microsoft Windows
Virtualization
Operating systems
Computer Networking
+9 more

Company Overview:

Planview has one mission: to build the future of connected work with market-leading portfolio management and work management solutions. Planview is a recognized innovator and industry leader; our solutions enable organizations to connect the business from ideas to impact, empowering companies to accelerate the achievement of what matters most. Our solutions span every class of work, resource, and organization to address the varying needs of diverse and distributed teams, departments, and enterprises.


As a Sr CloudOps Engineer II, you will oversee teams of Engineers and be a champion for configuration management, technologies in the cloud, and continuous improvement. You will work closely with global leaders to ensure that our applications, infrastructure, and processes are scalable, secure, and supportable. By leveraging your production experience and development skills you will work hand in hand with Engineers (Dev, DevOps, DBOps) to design and implement solutions that improve delivery of value to customers, reduce costs, and eliminate toil.


Responsibilities (What you will do):

  • Guide the professional development of Engineers and support the teams to accomplish business goals.
  • Work closely with leaders in Israel to align on priorities and to architect, deliver, and manage our products.
  • Build systems that are secure, scalable, and self-healing.
  • Manage and improve deployment pipelines.
  • Triage and remediate production issues.
  • Participate in on-call rotations for escalations.

Qualifications (What you will bring):

  • Bachelor's degree in CS or equivalent experience in a related field.
  • 2+ years managing Engineering teams.
  • 8+ years of experience as a site reliability or platform engineer, preferably in a fast-scaling environment.
  • 5+ years administering Linux and Windows environments.
  • 3+ years of programming/scripting experience (e.g., Python, JavaScript, PowerShell).
  • Strong technical knowledge of operating systems (Linux and Windows), virtualization, storage systems, networking, and firewall implementations.
  • Experience maintaining production environments on-premises (90%) and in the cloud (10%) (e.g., AWS, Google Cloud, Azure).
  • Solid understanding of networking principles and how they apply to data flow and security.
  • Experience automating deployments of cloud-based services (e.g., AWS EC2/RDS, Docker, Kubernetes).
  • Experience managing CI/CD infrastructure, with strong proficiency in platforms like Bitbucket and Jenkins to streamline deployment pipelines and ensure efficient software delivery.
  • Management of resources using Infrastructure as Code tools (e.g., CloudFormation, Terraform, Chef).
  • Knowledge of observability tools such as LogicMonitor, New Relic, Prometheus, and Coralogix, as well as their implementation.
  • Experience working within Agile and Lean software development teams.
  • Experience working in globally distributed teams.
  • Ability to see the big picture and manage risks.


Technology Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Kochi (Cochin)
5 - 8 yrs
₹12L - ₹27L / yr
Snowflake
Metabase
MongoDB
Data Pipelines
Amazon Web Services (AWS)
+4 more

Job Description & Specification: 

Post Title: Data Engineer

Work Mode: Kochi Onsite - UK Time zone


Role Overview: 

We are seeking a talented and experienced Data Engineer to join our team. The ideal candidate will have expertise in technologies such as Metabase, dbt, Stitch, Snowflake, Avo, and MongoDB. As a Data Engineer, you will play a crucial role in designing, developing, and maintaining our data infrastructure to support our analytics and data-driven decision-making processes.


Responsibilities:

  • Design, develop, and implement scalable data pipelines and ETL processes using tools such as Stitch and dbt to ingest, transform, and load data from various sources into our data warehouse (Snowflake).
  • Implement data modeling best practices and standards using dbt to create and manage data models for reporting and analytics.
  • Collaborate with cross-functional teams to understand data requirements and deliver solutions that meet business needs.
  • Develop and maintain dashboards and visualizations in Metabase to enable self-service analytics and data exploration for internal teams.
  • Build and optimize ETL processes to ensure data quality and integrity.
  • Optimize data processing and storage solutions for performance, scalability, and reliability, leveraging cloud-based technologies.
  • Implement monitoring and alerting systems to proactively identify and address data issues.
  • Implement data quality checks and monitoring processes to ensure the accuracy, completeness, and integrity of data.
  • Manage and optimize databases (such as MongoDB) for performance and scalability.
  • Develop and maintain documentation, best practices, and standards for data engineering processes and workflows.
  • Stay up to date with emerging technologies and trends in data engineering, machine learning, and analytics, and evaluate their potential impact on data strategy and architecture.
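The ingest-and-load responsibilities above boil down to a merge keyed on a business identifier and a cursor field. A toy Python stand-in (field names are invented; in practice Stitch lands the rows and dbt expresses the merge in SQL against Snowflake):

```python
def merge_batch(warehouse, batch, key="id", cursor_field="updated_at"):
    """Merge a batch into the warehouse, keeping only the newest version
    of each record — a toy stand-in for an incremental model."""
    for record in batch:
        current = warehouse.get(record[key])
        if current is None or record[cursor_field] > current[cursor_field]:
            warehouse[record[key]] = record
    return warehouse
```

The cursor comparison is what makes re-delivered or out-of-order batches safe: an older duplicate never overwrites a newer row.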


Requirements:

  • Bachelor's or Master's degree in Computer Science.
  • Minimum of 4 years of experience working as a data engineer, with expertise in Metabase, dbt, Stitch, Snowflake, Avo, and MongoDB.
  • Strong programming skills in languages like Python, and experience with SQL and database technologies (e.g., PostgreSQL, MySQL, MongoDB).
  • Hands-on experience with data integration tools (e.g., Stitch), data modeling tools (e.g., dbt), and BI platforms (e.g., Metabase).
  • Experience with cloud platforms such as AWS.
  • Strong understanding of data modeling concepts, database design, and data warehousing principles.
  • Experience with big data technologies and frameworks (e.g., Hadoop, Spark, Kafka) and cloud-based data platforms (e.g., AWS EMR, Azure Databricks, Google BigQuery).
  • Familiarity with data integration tools, ETL processes, and workflow orchestration tools (e.g., Apache Airflow, Apache NiFi).
  • Excellent problem-solving skills and attention to detail.
  • Strong communication skills with the ability to work effectively in a global team environment.
  • Experience in the education or Edtech industry is a plus.
  • Knowledge of Avo for schema management and versioning will be an added advantage.
  • Familiarity with machine learning algorithms, data science workflows, and analytics tools (e.g., TensorFlow, PyTorch, scikit-learn, Tableau).
  • Knowledge of distributed computing concepts and containerization technologies.
  • Experience with version control systems (e.g., Git) and CI/CD pipelines.
  • Certifications in cloud computing (e.g., AWS Certified Developer, Google Cloud Professional Data Engineer) or data engineering (e.g., Databricks Certified Associate Developer) are desirable.


Benefits:

  • Competitive salary and bonus structure based on performance and achievement of goals.
  • Comprehensive benefits package including medical insurance.


Join us in shaping the future of technology by applying your expertise as a Data Engineer. If you are passionate about driving innovation and delivering impactful solutions, we invite you to be part of our dynamic team. Apply now!!

SDS Softwares

Tanavee Sharma
Posted by Tanavee Sharma
Remote only
1 - 2 yrs
₹1.2L - ₹2.2L / yr
React.js
NodeJS (Node.js)
Express
MongoDB
HTML/CSS
+11 more

💼 Job Title: Full Stack Developer (experienced only)

🏢 Company: SDS Softwares

💻 Location: Work from Home

💸 Salary range: ₹10,000 - ₹18,000 per month (based on knowledge and interview)

🕛 Shift Timings: 12 PM to 9 PM (5 days working)


About the role: As a Full Stack Developer, you will work on both the front-end and back-end of web applications. You will be responsible for developing user-friendly interfaces and maintaining the overall functionality of our projects.


⚜️ Key Responsibilities:

- Collaborate with cross-functional teams to define, design, and ship new features.

- Develop and maintain high-quality web applications (frontend + backend)

- Troubleshoot and debug applications to ensure peak performance.

- Participate in code reviews and contribute to the team’s knowledge base.


⚜️ Required Skills:

- Proficiency in HTML, CSS, JavaScript, Redux, and React.js for front-end development.

- Understanding of server-side languages such as Node.js.

- Familiarity with database technologies such as MySQL, MongoDB, or PostgreSQL.

- Basic knowledge of version control systems, particularly Git.

- Strong problem-solving skills and attention to detail.

- Excellent communication skills and a team-oriented mindset.


💠 Qualifications:

- Individuals with full-time work experience (1 to 2 years) in software development.

- Must have a personal laptop and stable internet connection.

- Ability to join immediately is preferred.


If you are passionate about coding and eager to learn, we would love to hear from you. 👍

Deltek
Sri Priyanka
Posted by Sri Priyanka
Remote, Bengaluru (Bangalore)
9 - 15 yrs
Best in industry
skill iconJava
Microservices
skill iconAmazon Web Services (AWS)
Artificial Intelligence (AI)


Advanced Software Architect

Position Responsibilities :


  • Lead the architecture and development of AI-powered, distributed systems that meet enterprise-grade performance and security standards.
  • Leverage AI tools for code generation, architectural design, and documentation to accelerate delivery and improve quality.
  • Design, build, and maintain services using Python, Java, and Node.js, following clean-code and secure design principles.
  • Develop agentic AI-based tools, domain-specific copilots, and developer productivity enhancements.
  • Collaborate with cross-functional teams to define modular, scalable, and compliant architecture patterns.
  • Conduct technical design reviews and produce detailed documentation, including system specifications, API docs, and architecture diagrams.
  • Integrate AI solutions into CI/CD pipelines, ensuring observability, automated testing, and deployment standards are met.
  • Implement robust monitoring and performance engineering practices to maintain high-quality deployments.
  • Continuously evaluate emerging AI technologies and integrate them into development workflows for maximum impact.
  • Champion best practices in security, automation, and performance optimization across the organization.
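As a sketch of the RAG pipelines named in the qualifications below, here is the retrieval step with learned embeddings replaced by bag-of-words counts (corpus and names are invented; production systems use a vector store and a real embedding model):

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b.get(t, 0) for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query, documents, k=1):
    """Rank documents against the query and return the top-k: the
    retrieval half of a retrieval-augmented generation pipeline."""
    qv = Counter(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: cosine(qv, Counter(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query, documents):
    """Augment the prompt with retrieved context before generation."""
    context = "\n".join(retrieve(query, documents, k=2))
    return f"Context:\n{context}\n\nQuestion: {query}"
```

Swapping `Counter` vectors for embeddings and the sorted list for an approximate-nearest-neighbor index is exactly the jump from this sketch to a production RAG pipeline.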


Qualifications :


  • 8+ years in software engineering with full-stack or backend development in Python, Java, and/or Node.js.
  • 3+ years with AI tools for development, prototyping, or documentation tasks.
  • Experience with cloud-native development and containerized deployment (Docker, Kubernetes).
  • Knowledge of AI integration patterns, vector stores, prompt engineering, and RAG pipelines.
  • Ability to design software architecture using sequence diagrams, ERDs, data models, and threat models.
  • Comfortable with Gen AI-first environments and working with remote Agile teams.

Preferred Qualifications

  • Experience building AI copilots or developer tools using OpenAI/Claude SDKs, LangChain, or similar frameworks.
  • Experience working in a fast-paced, AIDLC environment, with a strong understanding of CI/CD practices.
  • Familiarity with GitHub Actions, Argo Workflows, Terraform, and monitoring/observability tools.
  • Containerization and Orchestration: Proficiency in Docker and Kubernetes for containerization and orchestration.
  • Cloud Platforms: Experience with cloud computing platforms such as AWS, Azure, or OCI Cloud.
Quantiphi

Nikita Sinha
Posted by Nikita Sinha
Bengaluru (Bangalore)
4 - 12 yrs
Up to ₹45L / yr (varies)
MLOps
Amazon Web Services (AWS)
Microsoft Azure
LLMOps
Databricks

We are seeking a skilled and passionate ML Engineer with 3+ years of experience to join our team. The ideal candidate will be instrumental in developing, deploying, and maintaining machine learning models, with a strong focus on MLOps practices.

This role requires hands-on experience with Azure cloud services, Databricks, and MLflow to build robust and scalable ML solutions.


Responsibilities

  • Design, develop, and implement machine learning models and algorithms to solve complex business problems.
  • Collaborate with data scientists to transition models from research and development into production-ready systems.
  • Build and maintain scalable data pipelines for ML model training and inference using Databricks.
  • Implement and manage the ML model lifecycle using MLflow, including experiment tracking, model versioning, and model registry.
  • Deploy and manage ML models in production environments on Azure, leveraging services such as:
      • Azure Machine Learning
      • Azure Kubernetes Service (AKS)
      • Azure Functions
  • Support MLOps workloads by automating model training, evaluation, deployment, and monitoring processes.
  • Ensure the reliability, performance, and scalability of ML systems in production.
  • Monitor model performance, detect model drift, and implement retraining strategies.
  • Collaborate with DevOps and Data Engineering teams to integrate ML solutions into existing infrastructure and CI/CD pipelines.
  • Document model architecture, data flows, and operational procedures.
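To illustrate the drift-monitoring responsibility above, a self-contained Population Stability Index in Python (the bin count and the 0.2 rule of thumb are common conventions, not this team's thresholds):

```python
import math

def psi(expected, actual, bins=4):
    """Population Stability Index between a training-time feature sample
    and a production sample — a common drift signal in MLOps."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def fractions(values):
        counts = [0] * bins
        for v in values:
            counts[sum(v > e for e in edges)] += 1
        return [max(c / len(values), 1e-6) for c in counts]  # avoid log(0)

    e, a = fractions(expected), fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```

A PSI above roughly 0.2 is conventionally read as significant drift and a trigger for the retraining strategies the role mentions; MLflow would log the metric per scoring batch.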

Qualifications

Education

  • Bachelor’s or Master’s degree in Computer Science, Engineering, Statistics, or a related quantitative field.

Experience

  • Minimum 3+ years of professional experience as an ML Engineer or in a similar role.

Required Skills

  • Strong proficiency in Python for data manipulation, machine learning, and scripting.
  • Hands-on experience with machine learning frameworks, such as:
      • Scikit-learn
      • TensorFlow
      • PyTorch
      • Keras
  • Demonstrated experience with MLflow for:
      • Experiment tracking
      • Model management
      • Model deployment
  • Proven experience working with Microsoft Azure cloud services, specifically:
      • Azure Machine Learning
      • Azure Databricks
      • Related compute and storage services
  • Solid experience with Databricks for:
      • Data processing
      • ETL pipelines
      • ML model development
  • Strong understanding of MLOps principles and practices, including:
      • CI/CD for ML
      • Model versioning
      • Model monitoring
      • Model retraining
  • Experience with containerization and orchestration technologies, including:
      • Docker
      • Kubernetes (especially AKS)
  • Familiarity with SQL and data warehousing concepts.
  • Experience working with large datasets and distributed computing frameworks.
  • Strong problem-solving skills and attention to detail.
  • Excellent communication and collaboration skills.

Nice-to-Have Skills

  • Experience with other cloud platforms (AWS or GCP).
  • Knowledge of big data technologies such as Apache Spark.
  • Experience with Azure DevOps for CI/CD pipelines.
  • Familiarity with real-time inference patterns and streaming data.
  • Understanding of Responsible AI principles, including fairness, explainability, and privacy.

Certifications (Preferred)

  • Microsoft Certified: Azure AI Engineer Associate
  • Databricks Certified Machine Learning Associate (or higher) 
NonStop io Technologies Pvt Ltd
Kalyani Wadnere
Posted by Kalyani Wadnere
Pune
5 - 7 yrs
Best in industry
Java
Selenium
Selenium WebDriver
CI/CD
Appium

About NonStop io Technologies:

NonStop io Technologies is a value-driven company with a strong focus on process-oriented software engineering. We specialize in Product Development and have a decade's worth of experience in building web and mobile applications across various domains. NonStop io Technologies follows core principles that guide its operations and believes in staying invested in a product's vision for the long term. We are a small but proud group of individuals who believe in the 'givers gain' philosophy and strive to provide value in order to seek value. We are committed to and specialize in building cutting-edge technology products and serving as trusted technology partners for startups and enterprises. We pride ourselves on fostering innovation, learning, and community engagement. Join us to work on impactful projects in a collaborative and vibrant environment.


Brief Description:

We are seeking a highly skilled QA Automation Engineer with strong expertise in Java and Selenium to join our growing engineering team. The ideal candidate will play a key role in designing, developing, and maintaining scalable test automation frameworks while ensuring high product quality across releases.


Roles and Responsibilities:

● Design, develop, and maintain robust automation frameworks using Java and Selenium

● Build automated test scripts for web applications and integrate them into CI/CD pipelines

● Collaborate closely with developers, product managers, and business analysts to understand requirements and define effective test strategies

● Participate in sprint planning, requirement reviews, and technical discussions

● Perform root cause analysis for defects and work with engineering teams for resolution

● Improve automation coverage and reduce manual regression effort

● Ensure test environments, test data, and execution reports are maintained and documented

● Mentor junior QA engineers and promote best practices in automation

● Develop, execute, and maintain comprehensive test plans and test cases for manual and automated testing

● Perform functional, regression, performance, and security testing to ensure software quality

● Design and develop automated test scripts using tools such as Selenium, Appium, or similar frameworks

● Identify, document, and track software defects, working closely with development teams for resolution

● Ensure test coverage by working closely with developers, product managers, and other stakeholders

● Establish and maintain continuous integration (CI) and continuous deployment (CD) pipelines for test automation

● Conduct API testing using tools like Postman or RestAssured

● Collaborate with cross-functional teams to enhance the overall quality of the product

● Stay up to date with the latest industry trends and best practices in QA methodologies and automation frameworks
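
The framework-design responsibilities above typically center on the page-object pattern. The sketch below shows the idea in plain Python, with a stub driver standing in for Selenium WebDriver so no browser is needed; all class, locator, and URL names are hypothetical:

```python
class StubDriver:
    """Stand-in for a Selenium-style WebDriver (illustrative only)."""
    def __init__(self):
        self.fields = {}
        self.url = None

    def get(self, url):
        self.url = url

    def type(self, locator, text):
        self.fields[locator] = text

    def click(self, locator):
        # pretend a successful login redirects to /home
        if locator == "login-btn" and self.fields.get("user"):
            self.url = "/home"

class LoginPage:
    """Page object: one class per page, exposing intent-level actions
    so tests read as behavior, not as sequences of raw locators."""
    URL = "/login"

    def __init__(self, driver):
        self.driver = driver

    def open(self):
        self.driver.get(self.URL)
        return self

    def login(self, user, password):
        self.driver.type("user", user)
        self.driver.type("password", password)
        self.driver.click("login-btn")
        return self.driver.url
```

With a real WebDriver injected in place of `StubDriver`, the test code itself would not change, which is the point of the pattern.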


Requirements:

● 5 to 7 years of experience in QA automation

● Strong hands-on experience with Java and Selenium WebDriver

● Experience in building or enhancing automation frameworks from scratch

● Good understanding of TestNG or JUnit

● Experience with Maven or Gradle

● Familiarity with CI/CD tools such as Jenkins, GitHub Actions, or similar

● Strong understanding of Agile Scrum methodology

● Experience with API testing tools such as Rest Assured or Postman is a plus

● Knowledge of version control systems like Git

● Strong analytical and problem-solving skills

● Strong understanding of software testing life cycle (STLC) and defect lifecycle management

● Experience with version control systems (e.g., Git)

● Relevant certifications in software testing (e.g., ISTQB) are desirable but not required

● Solid understanding of software testing principles, methodologies, and techniques

● Excellent analytical and problem-solving skills

● Strong attention to detail and a commitment to delivering high-quality software

● Good communication and collaboration skills, with the ability to work effectively in a team environment


Good to Have:

● Experience with performance testing tools

● Exposure to cloud platforms such as AWS or Azure

● Knowledge of containerization tools like Docker

● Experience in BDD frameworks such as Cucumber.


Why Join Us?

● A collaborative and learning-driven environment

● Exposure to AI and software engineering innovations

● Excellent work ethic and culture


If you're passionate about technology and want to work on impactful projects, we'd love to hear from you!

NonStop io Technologies Pvt Ltd
Posted by Kalyani Wadnere
Pune
6 - 10 yrs
Best in industry
Spring Boot
Spring MVC
MVC Framework
Java
Hibernate (Java)

Company Description

NonStop io Technologies, founded in August 2015, is a Bespoke Engineering Studio specializing in Product Development. With over 80 satisfied clients worldwide, we serve startups and enterprises across prominent technology hubs, including San Francisco, New York, Houston, Seattle, London, Pune, and Tokyo. Our experienced team brings over 10 years of expertise in building web and mobile products across multiple industries. Our work is grounded in empathy, creativity, collaboration, and clean code, striving to build products that matter and foster an environment of accountability and collaboration.


Role Description

This is a full-time hybrid role for a Java Software Engineer, based in Pune. The Java Software Engineer will be responsible for designing, developing, and maintaining software applications. Key responsibilities include working with microservices architecture, implementing and managing the Spring Framework, and programming in Java. Collaboration with cross-functional teams to define, design, and ship new features is also a key aspect of this role.


Responsibilities:

● Develop and Maintain: Write clean, efficient, and maintainable code for Java-based applications 

● Collaborate: Work with cross-functional teams to gather requirements and translate them into technical solutions 

● Code Reviews: Participate in code reviews to maintain high-quality standards 

● Troubleshooting: Debug and resolve application issues in a timely manner 

● Testing: Develop and execute unit and integration tests to ensure software reliability

● Optimize: Identify and address performance bottlenecks to enhance application performance 


Qualifications & Skills:

● Strong knowledge of Java, Spring Framework (Spring Boot, Spring MVC), and Hibernate/JPA 

● Familiarity with RESTful APIs and web services 

● Proficiency in working with relational databases like MySQL or PostgreSQL 

● Practical experience with AWS cloud services and building scalable, microservices-based architectures

● Experience with build tools like Maven or Gradle 

● Understanding of version control systems, especially Git 

● Strong understanding of object-oriented programming principles and design patterns 

● Familiarity with automated testing frameworks and methodologies 

● Excellent problem-solving skills and attention to detail 

● Strong communication skills and ability to work effectively in a collaborative team environment 


Why Join Us? 

● Opportunity to work on cutting-edge technology products 

● A collaborative and learning-driven environment 

● Exposure to AI and software engineering innovations 

● Excellent work ethic and culture 


If you're passionate about technology and want to work on impactful projects, we'd love to hear from you

NonStop io Technologies Pvt Ltd
Posted by Kalyani Wadnere
Pune
4 - 7 yrs
Best in industry
DevOps
Amazon Web Services (AWS)
Terraform
Windows Azure
Google Cloud Platform (GCP)

About NonStop io Technologies:

NonStop io Technologies is a value-driven company with a strong focus on process-oriented software engineering. We specialize in Product Development and have a decade's worth of experience in building web and mobile applications across various domains. NonStop io Technologies follows core principles that guide its operations and believes in staying invested in a product's vision for the long term. We are a small but proud group of individuals who believe in the 'givers gain' philosophy and strive to provide value in order to seek value. We are committed to and specialize in building cutting-edge technology products and serving as trusted technology partners for startups and enterprises. We pride ourselves on fostering innovation, learning, and community engagement. Join us to work on impactful projects in a collaborative and vibrant environment.


Brief Description:

We are looking for a skilled and proactive DevOps Engineer to join our growing engineering team. The ideal candidate will have hands-on experience in building, automating, and managing scalable infrastructure and CI/CD pipelines. You will work closely with development, QA, and product teams to ensure reliable deployments, performance, and system security.


Roles and Responsibilities:

● Design, implement, and manage CI/CD pipelines for multiple environments

● Automate infrastructure provisioning using Infrastructure as Code tools

● Manage and optimize cloud infrastructure on AWS, Azure, or GCP

● Monitor system performance, availability, and security

● Implement logging, monitoring, and alerting solutions

● Collaborate with development teams to streamline release processes

● Troubleshoot production issues and ensure high availability

● Implement containerization and orchestration solutions such as Docker and Kubernetes

● Enforce DevOps best practices across the engineering lifecycle

● Ensure security compliance and data protection standards are maintained
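
As one concrete shape of the CI/CD responsibilities above, here is a minimal GitHub Actions pipeline sketch: build and test on every push, deploy only from `main`. The repository layout, Node version, and `scripts/deploy.sh` are assumptions for illustration, not a prescribed setup:

```yaml
name: ci
on: [push]
jobs:
  build-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npm test
  deploy:
    needs: build-test                      # deploy only after tests pass
    if: github.ref == 'refs/heads/main'    # and only from main
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./scripts/deploy.sh           # placeholder deploy step
```

The same gate-then-deploy structure carries over to Jenkins, GitLab CI, or Azure DevOps with different syntax.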


Requirements:

● 4 to 7 years of experience in DevOps or Site Reliability Engineering

● Strong experience with cloud platforms such as AWS, Azure, or GCP - Relevant Certifications will be a great advantage

● Hands-on experience with CI/CD tools like Jenkins, GitHub Actions, GitLab CI, or Azure DevOps

● Experience working in microservices architecture

● Exposure to DevSecOps practices

● Experience in cost optimization and performance tuning in cloud environments

● Experience with Infrastructure as Code tools such as Terraform, CloudFormation, or ARM

● Strong knowledge of containerization using Docker

● Experience with Kubernetes in production environments

● Good understanding of Linux systems and shell scripting

● Experience with monitoring tools such as Prometheus, Grafana, ELK, or Datadog

● Strong troubleshooting and debugging skills

● Understanding of networking concepts and security best practices


Why Join Us?

● Opportunity to work on a cutting-edge healthcare product

● A collaborative and learning-driven environment

● Exposure to AI and software engineering innovations

● Excellent work ethic and culture


If you're passionate about technology and want to work on impactful projects, we'd love to hear from you!

Recruiting Bond
Posted by Pavan Kumar
Bengaluru (Bangalore), Mumbai
10 - 16 yrs
₹75L - ₹130L / yr
Distributed Systems
Microservices
Enterprise architecture
System Design & Architecture
Event-Driven Architecture

🚨 We’re Building a “Top 1% Engineering Org”


We’re building a high-talent-density, AI-first R&D organization from scratch — inside a publicly listed company undergoing a full-scale transformation.

Think:

→ Rewriting legacy systems into AI-native architectures

→ Embedding LLMs + Agentic AI into core workflows

→ Reimagining platforms, infra, and data systems for the next decade

This is the kind of shift you’d expect from Google, Microsoft, or Meta —

Except you get to build it from day 0 → scale it globally.


About the Role / Team

We are building a next-generation AI-first R&D organization in Bengaluru, focused on solving complex problems across LLMs, Agentic AI systems, distributed computing, and enterprise-scale architectures.


This initiative is part of a publicly listed global company investing heavily in AI-driven transformation, re-architecting its platforms into intelligent, autonomous systems powered by large language models, workflows, and decision engines.


You will be working on:

  • Agentic AI systems & LLM-powered workflows
  • Distributed, scalable backend systems
  • Enterprise-grade AI platforms
  • Automation-first engineering environments


🚀 The Mandate

Own and evolve the technical backbone of an AI-first enterprise platform.


You will define architecture across LLM-powered systems, distributed services, and data platforms — and lead critical transformations from legacy → AI-native systems.


🧩 What You’ll Do

  • Architect large-scale distributed systems powering AI-driven workflows
  • Lead 0→1 and 1→N platform builds (LLM integrations, agentic systems, orchestration layers)
  • Redesign legacy systems into scalable, modular, AI-native architectures
  • Drive system design excellence across teams (APIs, infra, observability, reliability)
  • Make high-stakes decisions on trade-offs (latency, cost, scalability, model performance)
  • Mentor senior engineers and influence engineering culture/org standards
  • Partner with product, data, and leadership on long-term technical strategy


🧠 What We’re Looking For

  • Proven track record building high-scale backend or platform systems
  • Deep expertise in distributed systems, microservices, cloud (AWS/GCP/Azure)
  • Strong exposure to data systems, data infrastructure, and real-time architectures
  • Experience or strong interest in LLMs, GenAI, or AI system design
  • Exceptional system design, abstraction, and problem-solving ability
  • High ownership mindset — you think in terms of systems, not tickets
  • Strong coding skills in Python / Java / Go / Node.js
  • Solid understanding of data structures, system design basics, and backend architecture
  • Experience building scalable APIs and services
  • Familiarity or curiosity around AI/LLMs, async systems, or event-driven design
  • Strong debugging, problem-solving, and ownership mindset
  • Solve hard system problems (latency, scale, reliability)
  • Drive cross-team technical decisions and standards
  • Design large-scale distributed systems and backend platforms
  • Mentor senior engineers and provide technical leadership across the org


Nice to Have

  • Experience integrating LLMs, vector databases, or AI pipelines
  • Contributions to architecture at scale
  • Experience with Agentic AI / LLM orchestration frameworks
  • Background in product engineering or platform companies
  • Exposure to global-scale systems (millions of users / high throughput)


🔥 What Sets You Apart

  • Built platforms used by millions of users / high-throughput systems
  • Experience with event-driven systems, stream processing, or infra platforms
  • Prior work on AI/ML platforms, model serving, or intelligent systems
SAAS Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore)
9 - 14 yrs
₹50L - ₹65L / yr
Node.js
Python
Amazon Web Services (AWS)
TypeScript
MongoDB

Job Details

Job Title: Director of Engineering

Industry: SAAS

Function – Information Technology

Experience Required: 9-14 years

Working Days: 6 days

Employment Type: Full Time

Job Location: Bangalore

CTC Range: Best in Industry

 

Preferred Skills: TypeScript, AWS, NodeJS, MongoDB, React.js, WebGL, Three.js, AI/ML, Docker, Kubernetes

 

Criteria

Candidates must have 9+ years of engineering experience, with 3–4 years in technical leadership

Hands-on expertise with React/Next.js, Node.js/Python, and AWS.

Ability to design scalable architectures for high-performance systems.

Should have AI/ML deployment experience

Strong 3D graphics/WebGL/Three.js knowledge.

Candidates should be from SAAS/Software/IT Services based startups or scaleup companies only

 

Job Description

The Role:

Company is hiring a hands-on Director of Engineering who codes, architects systems, and builds teams. You’ll set the technical foundation, drive engineering excellence, and own the architecture of our AI, 3D, and XR platform.

This is not a pure management role - expect to spend 50–60% of your time writing code, solving deep technical problems, and owning mission-critical systems. As we scale, this role transitions into CTO, taking full ownership of technical vision and long-term strategy. 

 

What You’ll Own:

1. Technical Leadership & Architecture

● Architect company’s full-stack platform across frontend, backend, infrastructure, and AI.

● Scale core systems: VersaAI engine, rendering pipeline, AR deployment, analytics.

● Make decisions on stack, scalability patterns, architecture, and technical debt.

● Own design for high-performance 3D asset processing, real-time rendering, and ML deployment.

● Lead architectural discussions, design reviews, and set engineering standards.

 

2. Hands-On Development

● Write production-grade code across frontend, backend, APIs, and cloud infra.

● Build critical features and core system components independently.

● Debug complex systems and optimize performance end-to-end.

● Implement and optimize AI/ML pipelines for 3D generation, CV, and recognition.

● Build scalable backend services for large-scale asset processing and real-time pipelines.

● Develop WebGL/Three.js rendering and AR workflows.

 

3. Team Building & Engineering Management

● Hire and grow a team of 5–8 engineers initially (scaling to 15–20).

● Establish engineering culture, values, and best practices.

● Build career frameworks, performance systems, and growth plans.

● Conduct 1:1s, mentor engineers, and drive continuous improvement.

● Set up processes for agile execution, deployments, and incident response.

 

4. Product & Cross-Functional Collaboration

● Work with the founder and product team on roadmap, feasibility, and prioritization.

● Translate product requirements into technical execution plans.

● Collaborate with design for UX quality and technical alignment.

● Support sales and customer success with integrations and technical discussions.

● Contribute technical inputs to product strategy and customer-facing initiatives.

 

5. Engineering Operations & Infrastructure

● Own CI/CD, testing frameworks, deployments, and automation.

● Create monitoring, logging, and alerting setups for reliability.

● Manage AWS infrastructure with a focus on cost and performance.

● Build internal tools, documentation, and developer workflows.

● Ensure enterprise-grade security, compliance, and reliability.

 

Tech Stack:

1. Frontend

React.js, Next.js, TypeScript, WebGL, Three.js

2. Backend

Node.js, Python, Express/FastAPI, REST, GraphQL

3. AI/ML

PyTorch, TensorFlow, CV models, Stable Diffusion, LLMs, ML pipelines

4. 3D & Graphics

Three.js, WebGL, Babylon.js, glTF, USDZ, rendering optimization

5. Databases

PostgreSQL, MongoDB, Redis, vector databases

6. Cloud & Infra

AWS (EC2, S3, Lambda, SageMaker), Docker, Kubernetes CI/CD: GitHub Actions

Monitoring: Datadog, Sentry

 

What We’re Looking For:

1. Must-Haves

● 9+ years of engineering experience, with 3–4 years in technical leadership.

● Deep full-stack experience with strong system design fundamentals.

● Proven success building products from 0→1 in fast-paced environments.

● Hands-on expertise with React/Next.js, Node.js/Python, and AWS.

● Ability to design scalable architectures for high-performance systems.

● Strong people leadership with experience hiring and mentoring teams.

● Ready to code, review, design, and lead from the front.

● Startup mindset: fast execution, problem-solving, ownership.

 

2. Highly Desirable

● AI/ML deployment experience (CV, generative AI, 3D reconstruction).

● Strong 3D graphics/WebGL/Three.js knowledge.

● Experience with real-time systems, rendering optimizations, or large-scale pipelines.

● Background in B2B SaaS, XR, gaming, or immersive tech.

● Experience scaling engineering teams from 5 → 20+.

● Open-source contributions or technical content creation.

● Experience working closely with founders or executive leadership.

 

Why Company:

● Hard, meaningful engineering problems at the intersection of AI, 3D, XR, and web tech.

● Build from day zero – architecture, team, and culture.

● Path to CTO as the company scales.

● High autonomy to drive technical decisions.

● Direct founder collaboration on product vision.

● High ownership, high-growth environment.

● Backed by global leaders: Microsoft, Google, NVIDIA, AWS.

 

Location & Work Culture:

● Location: HSR Layout, Bengaluru

● Schedule: 6 days a week (5 days in office, Saturdays WFH)

● Culture: High-intensity, high-integrity, engineering-first

● Team: Young, ambitious, technically strong

Talent Pro
Posted by Mayank choudhary
Bengaluru (Bangalore)
5 - 8 yrs
₹20L - ₹25L / yr
Amazon Web Services (AWS)
Node.js

5+ years in backend engineering with strong system design expertise

Experience building scalable systems from scratch

Expert-level proficiency in Node.js

Deep understanding of distributed systems

Strong NoSQL design skills

Hands-on AWS cloud experience

Proven leadership and mentoring capability

Preferred candidates from SAAS/Software/IT Services based startups or scaleup companies

SAAS Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore)
5 - 7 yrs
₹20L - ₹25L / yr
TypeScript
Node.js
JavaScript
MongoDB
RESTful APIs

Job Details

Job Title: Full Stack Engineer

Industry: SAAS

Function – Information Technology

Experience Required: 5-7 years

Working Days: 6 days

Employment Type: Full Time

Job Location: Bangalore

CTC Range: Best in Industry

 

Preferred Skills: TypeScript, NodeJS, MongoDB, RESTful APIs, React.js

 

Criteria

Candidate should have at least 4 years of professional experience as a Full Stack Engineer

Hands-on experience with both React.js and Node.js

Solid understanding of MongoDB

Should have experience in RESTful APIs

Should be from a startup or scale up companies

Should have good experience in Typescript

Strong understanding of asynchronous programming patterns

Preferred candidates from SAAS/Software/IT Services based startups or scaleup companies

 

Job Description

The Role:

We’re looking for a Full Stack Engineer to build, scale, and maintain high-performance web applications for the company’s technology platforms. This role involves working across the stack (frontend, backend, and infrastructure) using modern JavaScript-based technologies.

You’ll collaborate closely with product managers, designers, and cross-functional engineering teams to deliver scalable, secure, and user-centric solutions. This role is ideal for someone who enjoys end-to-end ownership, technical problem-solving, and working in a fast-paced startup environment.

 

What You’ll Own

1. Full Stack Development

● Design, develop, test, and deploy robust and scalable web applications.

● Build and maintain server-side logic and microservices using Node.js, Express.js, and TypeScript.

● Contribute to frontend feature development and integration.

● Participate in feature planning, estimation, and execution.

 

2. Backend & API Engineering

● Design and develop RESTful APIs and backend services.

● Implement asynchronous workflows and scalable microservice architectures.

● Ensure performance, reliability, and security of backend systems.

● Implement authentication, authorization, and data protection best practices.
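
The asynchronous-workflow bullet above is language-agnostic; the fan-out/fan-in shape it refers to looks like this (shown in Python's asyncio for brevity; the same pattern maps to `Promise.all` in Node.js, and `fetch` here is a stand-in rather than a real I/O call):

```python
import asyncio

async def fetch(source: str) -> dict:
    # stand-in for an async I/O call (DB query, HTTP request, ...)
    await asyncio.sleep(0.01)
    return {"source": source}

async def build_response(sources):
    """Fan-out/fan-in: run independent I/O concurrently, not sequentially,
    so total latency is close to the slowest call, not the sum of all."""
    results = await asyncio.gather(*(fetch(s) for s in sources))
    return {r["source"]: r for r in results}

result = asyncio.run(build_response(["users", "orders"]))
```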

 

3. Database Design & Optimization

● Design and manage MongoDB schemas using Mongoose.

● Optimize queries and database performance for scale.

● Ensure data integrity and efficient data access patterns.

 

4. Frontend Collaboration & Integration

● Collaborate with frontend developers to integrate React components and APIs seamlessly.

● Ensure responsive, high-performing application behavior.

 

5. System Design & Scalability

● Contribute to system architecture and technical design discussions.

● Design scalable, maintainable, and future-ready solutions.

● Optimize applications for speed and scalability.

 

6. Product & Cross-Functional Collaboration

● Work closely with product and design teams to deliver high-quality features in rapid iterations.

● Participate in the full development lifecycle—from concept to deployment and maintenance.

 

7. Code Quality & Best Practices

● Write clean, testable, and maintainable code.

● Follow Git-based version control and code review best practices.

● Contribute to improving engineering standards and workflows.

 

What We’re Looking For

Must-Haves

● 4+ years of professional experience as a Full Stack Engineer or similar role.

● Strong proficiency in JavaScript and TypeScript.

● Hands-on experience with Node.js and Express.js.

● Solid understanding of MongoDB and Mongoose.

● Experience building and consuming RESTful APIs and microservices.

● Strong understanding of asynchronous programming patterns.

● Good grasp of system design principles and application architecture.

● Experience with Git and version control best practices.

● Bachelor’s degree in Computer Science, Engineering, or a related field.

 

Good-to-Have / Preferred

● Frontend development experience with React.js.

● Exposure to Three.js or similar 3D/visualization libraries.

● Experience with cloud platforms (AWS, GCP, Azure – EC2, S3, Lambda).

● Knowledge of Docker and containerization workflows.

● Experience with testing frameworks (Jest, Mocha, etc.).

● Familiarity with CI/CD pipelines and automated deployments.

 

Tools You’ll Use

● Backend: Node.js, Express.js, TypeScript

● Frontend: React.js (preferred)

● Database: MongoDB, Mongoose

● Version Control: Git, GitHub / GitLab

● Cloud & DevOps: AWS / GCP / Azure, Docker

● Collaboration: Google Workspace, Notion, Slack

 

Key Metrics You’ll Own

● Code quality, performance, and scalability

● Timely delivery of features and releases

● System reliability and reduction in production issues

● Contribution to architectural improvements

 

Why company

● Work on impactful, product-driven tech platforms.

● High-ownership role with end-to-end engineering exposure.

● Opportunity to work with modern technologies and evolving architectures.

● Collaborative startup culture with strong learning and growth opportunities.

 

SAAS Industry

Agency job
via Peak Hire Solutions by Dhara Thakkar
Bengaluru (Bangalore)
5 - 8 yrs
₹20L - ₹25L / yr
Amazon Web Services (AWS)
Node.js
RESTful APIs
NoSQL Databases
Systems design

Job Details

Job Title: Senior Backend Engineer

Industry: SAAS

Function – Information Technology

Experience Required: 5-8 years

Working Days: 6 days a week (5 days in office, Saturdays WFH)

Employment Type: Full Time

Job Location: Bangalore

CTC Range: Best in Industry

 

Preferred Skills: AWS, NodeJS, RESTful APIs, NoSQL

 

Criteria

· 5+ years in backend engineering with strong system design expertise

· Experience building scalable systems from scratch

· Expert-level proficiency in Node.js

· Deep understanding of distributed systems

· Strong NoSQL design skills

· Hands-on AWS cloud experience

· Proven leadership and mentoring capability

· Preferred candidates from SAAS/Software/IT Services based startups or scaleup companies

 

Job Description

The Role:

What You’ll Build:

1. System Architecture & Design

● Architect highly scalable backend systems from the ground up

● Define technology choices: frameworks, databases, queues, caching layers

● Evaluate microservices vs monoliths based on product stage

● Design REST, GraphQL, and real-time WebSocket APIs

● Build event-driven systems for asynchronous processing

● Architect multi-tenant systems with strict data isolation

● Maintain architectural documentation and technical specs

2. Core Backend Services

● Build high-performance APIs for 3D content, XR experiences, analytics, and user interactions

● Create 3D asset processing pipelines for uploads, conversions, and optimization

● Develop distributed job workers for CPU/GPU-intensive tasks

● Build authentication/authorization systems (RBAC)

● Implement billing, subscription, and usage metering

● Build secure webhook systems and third-party integration APIs

● Create real-time collaboration features via WebSockets/SSE

3. Data Architecture & Databases

● Design scalable schemas for 3D metadata, XR sessions, and analytics

● Model complex product catalogs with variants and hierarchies

● Implement Redis-based caching strategies

● Build search and indexing systems (Elasticsearch/Algolia)

● Architect ETL pipelines and data warehouses

● Implement sharding, partitioning, and replication strategies

● Design backup, restore, and disaster recovery workflows
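
The Redis caching bullet above usually means the cache-aside pattern: read through the cache, fall back to the database, then populate with a TTL. A minimal sketch, with a dict-based `FakeRedis` standing in for a real Redis client (all names are hypothetical):

```python
import time

class FakeRedis:
    """Dict-based stand-in for a Redis client (illustrative only)."""
    def __init__(self):
        self._data = {}

    def get(self, key):
        value, expires = self._data.get(key, (None, 0))
        if value is not None and time.monotonic() < expires:
            return value
        self._data.pop(key, None)  # drop expired entries lazily
        return None

    def setex(self, key, ttl, value):
        self._data[key] = (value, time.monotonic() + ttl)

def get_product(cache, db_lookup, product_id, ttl=60):
    """Cache-aside: try the cache, fall back to the DB, then populate."""
    key = f"product:{product_id}"
    cached = cache.get(key)
    if cached is not None:
        return cached, "hit"
    value = db_lookup(product_id)
    cache.setex(key, ttl, value)
    return value, "miss"
```

With a real client the `get`/`setex` calls stay the same; the design questions become TTL choice and invalidation on writes.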

4. Scalability & Performance

● Build systems designed for 10x–100x traffic growth

● Implement load balancing, autoscaling, and distributed processing

● Optimize API response times and database performance

● Implement global CDN delivery for heavy 3D assets

● Build rate limiting, throttling, and backpressure mechanisms

● Optimize storage and retrieval of large 3D files

● Profile and improve CPU, memory, and network performance
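
The rate-limiting and throttling bullet above is commonly implemented as a token bucket: each request spends a token, and tokens refill at a fixed rate up to a burst capacity. A self-contained sketch (the injectable clock is only there to make the logic testable):

```python
import time

class TokenBucket:
    """Token-bucket rate limiter: `rate` tokens/sec, bursts up to `capacity`."""
    def __init__(self, rate, capacity, now=time.monotonic):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.now = now
        self.last = now()

    def allow(self) -> bool:
        t = self.now()
        # refill proportionally to elapsed time, capped at capacity
        self.tokens = min(self.capacity,
                          self.tokens + (t - self.last) * self.rate)
        self.last = t
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

In production this state would live in a shared store (e.g. Redis) keyed per tenant or API key rather than in process memory.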

5. Infrastructure & DevOps

● Architect AWS infrastructure (EC2, S3, Lambda, RDS, ElastiCache)

● Build CI/CD pipelines for automated deployments and rollbacks

● Use IaC tools (Terraform/CloudFormation) for infra provisioning

● Set up monitoring, logging, and alerting systems

● Use Docker + Kubernetes for container orchestration

● Implement security best practices for data, networks, and secrets

● Define disaster recovery and business continuity plans

6. Integration & APIs

● Build integrations with Shopify, WooCommerce, Magento

● Design webhook systems for real-time events

● Build SDKs, client libraries, and developer tools

● Integrate payment gateways (Stripe, Razorpay)

● Implement SSO and OAuth for enterprise customers

● Define API versioning and lifecycle/deprecation strategies

7. Data Processing & Analytics

● Build analytics pipelines for engagement, conversions, and XR performance

● Process high-volume event streams at scale

● Build data warehouses for BI and reporting

● Develop real-time dashboards and insights systems

● Implement analytics export pipelines and platform integrations

● Enable A/B testing and experimentation frameworks

● Build personalization and recommendation systems

 

Technical Stack:

1. Backend Languages & Frameworks 

●  Primary: Node.js (Express, NestJS), Python (FastAPI, Django)

●  Secondary: Go, Java/Kotlin (Spring)

●  APIs: REST, GraphQL, gRPC


2. Databases & Storage

● SQL: PostgreSQL, MySQL

● NoSQL: MongoDB, DynamoDB

● Caching: Redis, Memcached

● Search: Elasticsearch, Algolia

● Storage/CDN: AWS S3, CloudFront

● Queues: Kafka, RabbitMQ, AWS SQS

 

3. Cloud & Infrastructure: 

● Cloud: AWS (primary), GCP/Azure (nice to have)

● Compute: EC2, Lambda, ECS, EKS

● Infrastructure: Terraform, CloudFormation

● CI/CD: GitHub Actions, Jenkins, CircleCI

● Containers: Docker, Kubernetes

 

4. Monitoring & Operations 

● Monitoring: Datadog, New Relic, CloudWatch

● Logging: ELK Stack, CloudWatch Logs

● Error Tracking: Sentry, Rollbar

● APM tools

 

5. Security & Auth

● Auth: JWT, OAuth 2.0, SAML

● Secrets: AWS Secrets Manager, Vault

● Security: Encryption (at rest/in transit), TLS/SSL, IAM
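
To make the JWT/HMAC items above concrete, here is a stdlib-only sketch of signing and verifying an HS256-style token. It is illustrative only, not a substitute for a vetted JWT library (no header segment, no algorithm negotiation):

```python
import base64
import hashlib
import hmac
import json
import time

def _b64(data: bytes) -> str:
    # URL-safe base64 without padding, as JWTs use
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_token(claims: dict, secret: bytes, ttl: int = 3600) -> str:
    """HMAC-SHA256-signed token carrying claims plus an expiry."""
    body = _b64(json.dumps({**claims, "exp": int(time.time()) + ttl}).encode())
    sig = _b64(hmac.new(secret, body.encode(), hashlib.sha256).digest())
    return f"{body}.{sig}"

def verify_token(token: str, secret: bytes):
    """Return the claims if the signature and expiry check out, else None."""
    body, sig = token.split(".")
    expected = _b64(hmac.new(secret, body.encode(), hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        return None  # tampered or wrong key
    padded = body + "=" * (-len(body) % 4)
    claims = json.loads(base64.urlsafe_b64decode(padded))
    if claims["exp"] < time.time():
        return None  # expired
    return claims
```

Note the use of `hmac.compare_digest` for constant-time comparison; a plain `==` on signatures invites timing attacks.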

 


What We’re Looking For:

1. Must-Haves

● 5+ years in backend engineering with strong system design expertise

● Experience building scalable systems from scratch

● Expert-level proficiency in at least one backend stack (Node, Python, Go, Java)

● Deep understanding of distributed systems and microservices

● Strong SQL/NoSQL design skills with performance optimization

● Hands-on AWS cloud experience

● Ability to write high-quality production code daily

● Experience building and scaling RESTful APIs

● Strong understanding of caching, sharding, horizontal scaling

● Solid security and best-practice implementation experience

● Proven leadership and mentoring capability


2. Highly Desirable

● Experience with large file processing (3D, video, images)

● Background in SaaS, multi-tenancy, or e-commerce

● Experience with real-time systems (WebSockets, streams)

● Knowledge of ML/AI infrastructure

● Experience with HA systems, DR planning

● Familiarity with GraphQL, gRPC, event-driven systems

● DevOps/infrastructure engineering background

● Experience with XR/AR/VR backend systems

● Open-source contributions or technical writing

● Prior senior technical leadership experience

 

Technical Challenges You’ll Solve:

● Designing large-scale 3D asset processing pipelines

● Serving XR content globally with ultra-low latency

● Scaling from thousands to millions of daily requests

● Efficiently handling CPU/GPU-heavy workloads

● Architecting multi-tenancy with complete data isolation

● Managing billions of analytics events at scale

● Building future-proof APIs with backward compatibility

 

Why company:

● Architectural Ownership: Build foundational systems from scratch

● Deep Technical Work: Solve distributed systems and scaling challenges

● Hands-On Impact: Design and code mission-critical infrastructure

● Diverse Problems: APIs, infra, data, ML, XR, asset processing

● Massive Scale Opportunity: Build systems for exponential growth

● Modern Stack and best practices

● Product Impact: Your architecture directly powers millions of users

● Leadership Opportunity: Shape engineering culture and direction

● Learning Environment: Stay at the forefront of backend engineering

● Backed by AWS, Microsoft, Google

 

Location & Work Culture:

● Location: Bengaluru

● Schedule: 6 days a week (5 days in office, Saturdays WFH)

● Culture: Builder mindset, strong ownership, technical excellence

● Team: Small, highly skilled backend and infra team

● Resources: AWS credits, latest tooling, learning budget

 

StarApps Studio


Shivani Kawade
Posted by Shivani Kawade
Pune
4 - 8 yrs
₹15L - ₹30L / yr
React.js
TypeScript
JavaScript
HTML/CSS
Ruby on Rails (ROR)
+9 more

StarApps Studio is a product-driven SaaS company building Shopify apps that power thousands of online stores. We’ve developed 6 highly-rated Shopify apps (averaging 4.9★) used by 30,000+ Shopify merchants worldwide, including over 1,000 Shopify Plus stores. In just a few years, our bootstrapped team grew from $5.5M to $10M in Annual Recurring Revenue (ARR) by obsessing over quality and merchant success. We’re a tight-knit, 20-person team based in Baner, Pune, on a mission to help e-commerce brands create world-class shopping experiences.


Role Overview

We are looking for a Full Stack Developer who will own features end-to-end with an emphasis on backend excellence. In this role, you will design and optimize complex data models and API architectures in Ruby on Rails, implement robust background job queues (e.g. delayed_job) for heavy workloads, and perform rigorous performance tuning to ensure our systems scale. On the frontend, you'll build and integrate React components to deliver complete, user-friendly features. This is a role for someone who loves tackling deep technical challenges in the backend while also crafting intuitive user interfaces – an opportunity to leverage your backend expertise while driving full-stack product ownership.


Key Responsibilities

  • Architect & Optimize Backend: Design scalable database schemas and efficient data models. Develop high-performance RESTful APIs and services in Ruby on Rails, ensuring clean, maintainable code and great performance.
  • Backend API Development: Design, implement, and maintain robust backend services and RESTful APIs in Ruby on Rails to support new features and internal tools.
  • End-to-End Performance Tuning: Optimize application performance across the stack – from minimizing frontend load times to improving backend query efficiency – for our high-traffic, data-intensive apps.
  • Collaboration & Agile Delivery: Work closely with designers, product managers, and QA to translate requirements into technical solutions. Participate in sprint planning, code reviews, and daily deployments to ship features continuously and reliably.
  • Quality & Maintenance: Write clean, maintainable code with appropriate test coverage (unit and integration tests) to ensure reliability. Monitor, debug, and resolve issues in production, and continually refactor and improve existing code for stability and performance.


What We’re Looking For (Requirements)

  • 4–8 Years Experience: Proven experience as a software developer in a product company (experience in e-commerce or SaaS is highly preferred). You have built real products used by actual customers at scale.
  • Ruby on Rails Expertise: Strong command of Ruby on Rails. Experience designing RESTful APIs, working with MVC architecture, and using Rails best practices. You should understand how to structure large Rails applications for maintainability.
  • Backend Proficiency: Comfortable building server-side applications and APIs with Ruby on Rails. You can implement business logic, integrate with databases, and create RESTful endpoints (bonus if you’ve worked with GraphQL or other backend frameworks).
  • Database Skills: Proficiency with PostgreSQL (or similar RDBMS). Capable of writing complex SQL queries, optimizing queries/indexes, and designing efficient relational schemas. Familiarity with Redis or caching strategies is a plus.
  • Front-End Proficiency: Comfortable building user interfaces with React and modern JavaScript/TypeScript. Able to implement frontend components that consume APIs and provide a smooth user experience.
  • System Design & Quality: Solid understanding of web application architecture, performance tuning, and scalability concerns. Experience with profiling, benchmarking, and optimizing web applications. Commitment to writing clean, maintainable code with proper tests.
  • Product Mindset: You care about the why behind the features. You are comfortable digging into requirements, questioning assumptions, and ensuring that we build solutions that truly solve merchant problems.
  • Adaptability & Collaboration: Excellent problem-solving skills, communication, and ability to work in a fast-paced, collaborative environment. You are a self-starter who can take ownership of tasks and drive them to completion, but also know when to ask for help.


Tech Stack

  • Frontend: React, TypeScript/JavaScript, HTML5, CSS3 (Tailwind/Bootstrap), modern build tools (Webpack, Babel).
  • Backend: Ruby on Rails (REST APIs, background jobs), some services in Python.
  • Database: PostgreSQL.
  • Cloud & DevOps: Amazon Web Services (EC2, S3, RDS, CloudFront), Docker, CI/CD for daily deployments.
  • Tools: Git (GitHub), Agile issue tracking (JIRA/Trello), and a keen use of automated testing.


(Don’t worry if you aren’t familiar with every item – we value willingness to learn. This is our current stack, and we continually adopt new technologies that improve our products.)


Why Join Us

  • High Impact & Ownership: Your work will directly enhance the shopping experience of 50M+ shoppers daily. At StarApps, developers deploy code daily and see the immediate impact on thousands of merchants – you’ll own projects end-to-end and build features that matter.
  • Fast-Growing, Profitable Startup: Join a bootstrapped, profitable company on an exciting growth trajectory (from $4M to $10M ARR). There’s no bureaucracy here – just a passionate team obsessed with product quality and merchant happiness. You’ll be part of our core team as we scale, with ample opportunities to grow into leadership roles.
  • Cutting-Edge Tech & Challenges: Work with modern technologies (React, Rails, AWS) on performance-intensive applications. Tackle complex challenges in scaling, optimization, and UX for a global user base – continuously sharpen your skills in a supportive, learning-focused environment.
  • Collaborative Culture: We are a small 25-person team that operates like a close-knit family. You’ll work side by side with experienced founders and a talented team that values innovation, humility, and continuous improvement. Our culture is open, empathetic, and growth-oriented – every voice is heard, and every team member plays a crucial role in our success.


Growth & Benefits: We invest in our team’s growth. Expect a competitive salary, performance bonuses, and whatever tools you need to do your best work. We sponsor professional development (courses, conferences, books) and encourage knowledge-sharing. You’ll enjoy a flexible leave policy, team off-sites, and the excitement of building a global product from our new office in Baner, Pune.

Euphoric Thought Technologies
Bengaluru (Bangalore)
4 - 5 yrs
₹8L - ₹14L / yr
Java
Spring Boot
MySQL
Amazon Web Services (AWS)
Django
+3 more

About Us

Euphoric Thought Technologies is a fast-growing technology company focused on delivering scalable, high-performance digital solutions. We are looking for a skilled Backend Developer to join our dynamic team and contribute to building robust and efficient systems.


Key Responsibilities

 Design, develop, and maintain scalable backend services and APIs

 Write clean, maintainable, and efficient code

 Collaborate with frontend developers, DevOps, and product teams

 Optimize applications for maximum speed and scalability

 Troubleshoot, debug, and upgrade existing systems

 Implement security and data protection best practices

 Participate in code reviews and technical discussions


Required Skills & Qualifications

 4–5 years of hands-on experience in backend development

 Strong proficiency in at least one backend language such as Java (including Core Java)

 Experience with frameworks like Spring Boot, Django, Express.js, etc.

 Good understanding of RESTful APIs and Microservices architecture

 Strong experience with databases (MySQL, PostgreSQL, MongoDB)

 Familiarity with version control systems (Git)

 Experience with cloud platforms (AWS/Azure/GCP) is a plus

 Knowledge of Docker, Kubernetes, CI/CD pipelines is an added advantage

 Strong problem-solving and analytical skills

ARDEM Incorporated
Isha Ashwini
Posted by Isha Ashwini
Remote only
0 - 0 yrs
₹1.5L - ₹1.8L / yr
IT operations
Network
Help desk
Amazon Web Services (AWS)
Microsoft Windows Azure
+2 more

📍 Position: IT Intern (Only candidates from BTech-IT background will be considered)

👩‍💻 Experience: 0–6 Months (Freshers/Recent graduates can apply)

🎓 Qualification: B.Tech (IT) / M.Tech (IT) only

📌 Mode: Remote (WFH)

⏳ Shift: Willingness to work in night/rotational shifts

🗣 Communication: Excellent English


𝐊𝐞𝐲 𝐑𝐞𝐬𝐩𝐨𝐧𝐬𝐢𝐛𝐢𝐥𝐢𝐭𝐢𝐞𝐬:

- Assist in troubleshooting and resolving basic desktop, software, hardware, and network-related issues under supervision.

- Support user account management activities using Azure Entra ID (Azure AD), Active Directory, and Microsoft 365.

- Assist the IT team in configuring, monitoring, and supporting AWS cloud services (EC2, S3, IAM, WorkSpaces).

- Support maintenance and monitoring of on-premises server infrastructure, internal applications, and email services.

- Assist with backups, basic disaster recovery tasks, and security procedures as per company policies.

- Help create and update technical documentation and knowledge base articles.

- Work closely with internal teams and assist in system upgrades, IT infrastructure improvements, and ongoing projects.


💻 Technical Requirements:

- Laptop with i5 or higher processor

- Reliable internet connectivity with 100 Mbps speed

ARDEM Incorporated
Remote only
0 - 1 yrs
₹1L - ₹1.8L / yr
Amazon Web Services (AWS)
Amazon EC2
Windows Azure
Troubleshooting
IAM
+2 more

Position Title: IT Intern (Full Time)

Department: Information Technology

Work Mode: Work From Home (WFH)

Educational Qualification: B.Tech (IT) / M.Tech (IT)

Shift: Rotational Shifts (6am to 3pm, 2pm to 11pm, and 10pm to 7am)

---

Role Summary

The IT Intern will support day-to-day IT operations and assist in maintaining the organization’s IT infrastructure. This role provides structured exposure to desktop support, cloud platforms, user management, server infrastructure, and IT security practices under the guidance of senior IT team members.

---

Key Responsibilities

· Assist in troubleshooting and resolving basic desktop, software, hardware, and network-related issues under supervision.

· Support user account management activities using Azure Entra ID (Azure Active Directory), Active Directory, and Microsoft 365.

· Assist the IT team in configuring, monitoring, and supporting AWS cloud services, including EC2, S3, IAM, and WorkSpaces.

· Support maintenance and monitoring of on-premises server infrastructure, internal applications, and email services.

· Assist with data backups, basic disaster recovery tasks, and implementation of security procedures in line with company policies.

· Create, update, and maintain technical documentation, SOPs, and knowledge base articles.

· Collaborate with internal teams to support system upgrades, IT infrastructure improvements, and ongoing IT projects.

· Adhere to company IT policies, data security standards, and confidentiality requirements.

---

Required Skills & Competencies

· Basic understanding of IT infrastructure, networking concepts, and operating systems

· Familiarity with cloud platforms such as AWS and/or Microsoft Azure

· Fundamental knowledge of Active Directory and user access management

· Strong willingness to learn and adapt to new technologies

· Good analytical, problem-solving, and communication skills

· Ability to work independently in a remote environment

---

Technical Requirements

· Personal laptop/desktop with required specifications

· Reliable internet connectivity to support remote work

---

Learning & Development Opportunities

· Hands-on exposure to enterprise IT environments

· Practical experience with cloud technologies and infrastructure support

· Mentorship from experienced IT professionals

· Opportunity to develop technical, documentation, and operational skills


About ARDEM

ARDEM is a leading Business Process Outsourcing (BPO) and Business Process Automation (BPA) service provider. With over 20 years of experience, ARDEM has consistently delivered high-quality outsourcing and automation services to clients across the USA and Canada. We are growing rapidly and continuously innovating to improve our services. Our goal is to strive for excellence and become the best Business Process Outsourcing and Business Process Automation company for our customers.

Cambridge Wealth (Baker Street Fintech)
Sangeeta Bhagwat
Posted by Sangeeta Bhagwat
Pune
3 - 5 yrs
₹10L - ₹12L / yr
Python
SQL
Amazon Web Services (AWS)
PostgreSQL
pandas
+9 more


Department

Product & Technology

Location

On-site | Prabhat Road, Pune

Experience

3-5 Years in a Data Engineering or Analytics Role

Domain

Fintech / Wealth Management — non-negotiable

Compensation

11-12 LPA Fixed + Performance Bonus

Growth

Title upgrade + salary revision at 12–18 months for strong performers


Why this role is different from most Data Engineer postings

You will work directly with the founding team on a live wealth management platform used by HNI and NRI clients. You will not spend years in a queue waiting to matter: your work ships to production, your analysis influences product decisions, and you will guide junior teammates from day one. If you perform, a raise and title upgrade are on the table within 12–18 months. This is the kind of early-team role that defines careers.


About Cambridge Wealth

Cambridge Wealth is a fast-growing, award-winning Financial Services and Fintech firm obsessed with quality and exceptional client service. We serve a high-profile clientele of NRI, Mass Affluent, HNI, and ultra-HNI professionals and have received multiple awards from major Mutual Fund houses and BSE. We are past the zero-to-one stage and now focused on scaling our features and intelligence layer. You will be joining at exactly the right time.


What You Will Be Doing

This is a central, hands-on data engineering role at the intersection of financial analytics and applied ML. You will own the data pipelines and analytical models that power investment insights for wealth management clients, transforming transaction data and portfolio information into measurable, actionable intelligence.

We are not looking for someone who just keeps the lights on. We want someone who looks at a working system and immediately sees how to make it 10x faster, cleaner, and smarter using AI and automation wherever possible.


Key Responsibilities:


Data Engineering & Pipelines

  • Build and optimize PostgreSQL-based pipelines to process large volumes of investment transaction data.
  • Design and maintain database schemas, foreign tables, and analytical structures for performance at scale.
  • Write advanced SQL — window functions, stored procedures, query optimization, index design.
  • Build Python automation scripts for data ingestion, transformation, and scheduled pipeline runs.
  • Monitor AWS RDS workloads and troubleshoot performance issues proactively.
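A toy version of the window-function work described above, using Python's built-in sqlite3 as a stand-in for PostgreSQL (the SQL has the same shape you would run against RDS); the schema and figures are invented, not the platform's real data model:

```python
import sqlite3

# In-memory SQLite stands in for PostgreSQL; window functions work the same way.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE transactions (
        client   TEXT,
        txn_date TEXT,
        amount   REAL   -- positive = purchase/SIP, negative = redemption
    )
""")
conn.executemany(
    "INSERT INTO transactions VALUES (?, ?, ?)",
    [
        ("A", "2024-01-01", 10000.0),
        ("A", "2024-02-01", 10000.0),
        ("A", "2024-03-01", -5000.0),
        ("B", "2024-01-15", 25000.0),
    ],
)

# Running net investment per client via a window function
rows = conn.execute("""
    SELECT client,
           txn_date,
           amount,
           SUM(amount) OVER (
               PARTITION BY client
               ORDER BY txn_date
               ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW
           ) AS running_invested
    FROM transactions
    ORDER BY client, txn_date
""").fetchall()

for row in rows:
    print(row)
# ('A', '2024-01-01', 10000.0, 10000.0)
# ('A', '2024-02-01', 10000.0, 20000.0)
# ('A', '2024-03-01', -5000.0, 15000.0)
# ('B', '2024-01-15', 25000.0, 25000.0)
```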


Financial Analytics & Modelling

  • Develop analytical frameworks to evaluate client portfolios against benchmarks and category averages.
  • Build data models covering mutual fund schemes, SIPs, redemptions, switches, and transfer lifecycles.
  • Create materialized views and derived tables optimized for dashboards and internal reporting tools.
  • Analyse client transaction history to surface patterns in investment behaviour and financial discipline.


Applied ML & AI-Driven Development

  • Use Python (Pandas, NumPy, Scikit-learn) for trend analysis, forecasting, and predictive modelling.
  • Implement classification or regression models to support financial pattern detection.
  • Use AI tools — LLMs, Copilots — to accelerate ETL development, code quality, and data cleaning.
  • Identify opportunities to automate repetitive data tasks and advocate for smarter tooling.
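To sketch the kind of classification work listed above without pinning down the real feature set: a from-scratch logistic regression on synthetic "SIP discipline" data. In practice this would be a Scikit-learn model on real transaction features; the feature and label semantics below are invented for illustration:

```python
import math
import random

random.seed(7)

def make_row():
    months_active = random.uniform(1, 60)   # tenure of the SIP (illustrative)
    missed_ratio = random.uniform(0, 1)     # fraction of skipped instalments
    label = 1 if (months_active > 24 and missed_ratio < 0.3) else 0
    return [1.0, months_active / 60.0, missed_ratio], label  # bias + scaled features

data = [make_row() for _ in range(500)]

def sigmoid(z: float) -> float:
    if z < -60:
        return 0.0
    if z > 60:
        return 1.0
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, x) -> float:
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)))

# Batch gradient descent on the log-loss
w = [0.0, 0.0, 0.0]
lr = 0.5
for _ in range(2000):
    grad = [0.0, 0.0, 0.0]
    for x, y in data:
        err = predict(w, x) - y
        for j in range(3):
            grad[j] += err * x[j]
    w = [wi - lr * g / len(data) for wi, g in zip(w, grad)]

accuracy = sum((predict(w, x) > 0.5) == bool(y) for x, y in data) / len(data)
print(f"training accuracy: {accuracy:.2f}")
```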


Data Quality & Governance

  • Own data integrity end-to-end in a live, high-stakes financial environment.
  • Build and maintain validation and cleaning protocols across all financial datasets.
  • Maintain Excel models, Power Query workflows, and structured reporting outputs.
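A minimal flavour of such a validation protocol in Python; the field names and rules are hypothetical, not the platform's actual schema:

```python
from datetime import datetime

# One rule per field; each returns True for acceptable values.
RULES = {
    "nav": lambda v: float(v) > 0,
    "units": lambda v: float(v) != 0,
    "txn_date": lambda v: datetime.strptime(v, "%Y-%m-%d") <= datetime.now(),
    "txn_type": lambda v: v in {"PURCHASE", "SIP", "REDEMPTION", "SWITCH_IN", "SWITCH_OUT"},
}

def validate(row: dict) -> list:
    """Return a list of human-readable issues; an empty list means the row is clean."""
    issues = []
    for field, check in RULES.items():
        try:
            if not check(row[field]):
                issues.append(f"{field}: failed rule")
        except (KeyError, ValueError, TypeError):
            issues.append(f"{field}: missing or malformed")
    return issues

good = {"nav": "101.52", "units": "9.85", "txn_date": "2024-03-01", "txn_type": "SIP"}
bad = {"nav": "-1", "units": "0", "txn_date": "not-a-date", "txn_type": "BONUS"}

assert validate(good) == []
print(validate(bad))  # all four fields flagged
```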


Collaboration & Junior Mentorship

  • Work directly with Product, Investment Research, and Wealth Advisory teams.
  • Translate open-ended business questions into structured queries and measurable outputs.
  • Guide 1–2 junior trainees — review their work, set code quality standards, and help them grow.
  • Present findings clearly to non-technical stakeholders — no jargon, just clarity.


Skills — What We Need vs. What Helps

Must-Haves:

  • SQL & PostgreSQL (window functions, stored procedures, optimization)
  • Python — Pandas, NumPy for data processing and automation
  • ML fundamentals — classification or regression (Scikit-learn)
  • AWS RDS or equivalent cloud database experience
  • Financial domain knowledge — mutual funds, SIPs, portfolio concepts
  • Python data visualization — Matplotlib, Seaborn, or Plotly

Strong Advantage:

  • Excel — Power Query, advanced modelling
  • Materialized views, query planning, index optimization
  • Experience with BI/dashboard tools

Good to Have:

  • NoSQL databases
  • Prior fintech or wealth management startup experience


Financial Domain — Non-Negotiable

This is a wealth management platform. You must come in with a working understanding of:

  • Mutual fund structures, scheme types, and NAV-based transactions
  • Investment lifecycle — SIPs, Lump Sum, Redemptions, Switches, and STPs
  • Portfolio allocation and benchmarking against indices (e.g. Nifty 50, category averages)
  • How HNI/NRI clients interact with financial products differently from retail investors

You do not need to be a CFA. But if mutual funds and portfolio analytics are completely new territory, this role is not the right fit right now.


The Culture Fit — Read This Carefully

We are a small, fast-moving team. This is not a place where you wait for a ticket to arrive in your queue. The right person for this role:

  • Has worked at a small startup before and is used to wearing multiple hats
  • Finds broken or slow data systems genuinely irritating and fixes them without being asked
  • Reaches for Python or an LLM when there is a repetitive task — automating is instinctive
  • Is comfortable saying 'I don't know but I'll find out' and follows through independently
  • Wants visibility and ownership, not just a well-defined job description
  • Is looking for a role where strong performance is directly visible and rewarded


Growth Path — What Happens If You Perform

This is not a vague 'growth opportunity' pitch.

If you hit the bar in your first 12–18 months, you will receive a salary revision and a title upgrade to Senior Data Engineer or Lead Data Engineer depending on team expansion. As we scale our Data and AI team, this role is the natural stepping stone to a team lead position. You will also gain direct exposure to founding-team decision-making — the kind of access that is hard to get at larger companies.


Preferred Background

  • 2–4 years in a data engineering or analytics role at a startup or small Fintech
  • Experience in a live product environment where data errors have real consequences
  • Exposure to portfolio analytics, investment research, or wealth management platforms
  • Has mentored or reviewed code for at least one junior team member


Hiring Process

We respect your time. The process is direct and moves fast.

  • Screening Questions — 5 minutes online
  • Online Challenge — MCQs (Data, SQL, AWS, etc.) plus one applied ML or analytics problem, with a communication and personality component (focused, not trick questions)
  • People Round — 30-minute video call, culture and communication
  • Technical Deep-Dive — 1 hour in person, live financial data problems and your past work
  • Founder's Interview — 1 hour in person, growth conversation and mutual fit
  • Offer & Background Verification


CipherSonic Labs
Remote only
7 - 10 yrs
Best in industry
C++
C
Amazon Web Services (AWS)
Python

 

Job Title: Software Developer (Contractor)

Location: Remote, Up to 1-year contract

Compensation: Hourly

About Us: CipherSonic Labs is a cutting-edge technology company specializing in data security and privacy solutions for enterprises processing sensitive data in the cloud. We develop high-performance cryptographic software and hardware acceleration techniques to enable secure computing. Our team is looking for talented individuals to contribute to innovative projects in secure computing and high-performance software development.

Job Description: We are seeking a Software Developer to assist in the development of high-performance software solutions. This role will involve working on low-level programming, optimizing cryptographic algorithms, and improving performance for security-critical applications. The ideal candidate will have a passion for systems programming, algorithm optimization, and working in a high-performance computing environment.

Key Responsibilities:

·     Develop and optimize software using C/C++ for high-performance computing applications.

·     Work on cryptographic algorithm implementations and performance tuning.

·     Optimize memory management, threading, and parallel computing techniques.

·     Debug, profile, and test software for performance and reliability.

·     Write clean, efficient, and well-documented code.

Qualifications:

·     Completed a B.S. or higher degree in Computer Science or Computer Engineering.

·     Strong programming skills in C and C++.

·     Familiarity with Linux-based development environments.

·     Basic understanding of cryptographic algorithms and security principles is a plus.

·     Experience with AWS Lambda, EC2, S3, DynamoDB, API Gateway, Containerization (like Docker, Kubernetes) is a plus.

·     Knowledge of other programming languages such as Python, Rust, or Go is a plus.

·     Strong problem-solving skills and attention to detail.

·     Ability to work independently and collaboratively in a fast-paced startup environment.

What You’ll Gain:

·     Hands-on experience in systems programming, cryptography, and high-performance computing.

·     Opportunities to work on real-world security and privacy-focused projects.

·     Mentorship from experienced software engineers and researchers.

·     Exposure to cutting-edge cryptographic acceleration and secure computing techniques.

·     Potential for future full-time employment based on performance.

AEGION- A Legion of Agents


Nikita Sinha
Posted by Nikita Sinha
Bengaluru (Bangalore)
5 - 8 yrs
Up to ₹80L / yr (varies)
Python
FastAPI
NodeJS (Node.js)
TypeScript
React.js
+4 more

We're looking for an experienced Full-Stack Engineer who can architect and build AI-powered agent systems from the ground up. You'll work across the entire stack—from designing scalable backend services and LLM orchestration pipelines to creating frontend interfaces for agent interactions through widgets, bots, plugins, and browser extensions.


You should be fluent in modern backend technologies, AI/LLM integration patterns, and frontend development, with strong systems design thinking and the ability to navigate the complexities of building reliable AI applications.


Note: This is an on-site, 6-day-a-week role. We are in a critical product development phase where the speed of iteration directly determines market success. At this early stage, speed of execution and clarity of thought are our strongest moats, and we are doubling down on both as we build through our 0→1 journey.


WHAT YOU BRING:

You take ownership of complex technical challenges end to end, from system architecture to deployment, and thrive in a lean team where every person is a builder. You maintain a strong bias for action, moving quickly to prototype and validate AI agent capabilities while building production-grade systems. You consistently deliver reliable, scalable solutions that leverage AI effectively — whether it's designing robust prompt chains, implementing RAG systems, building conversational interfaces, or creating seamless browser extensions.

You earn trust through technical depth, reliable execution, and the ability to bridge AI capabilities with practical business needs. Above all, you are obsessed with building intelligent systems that actually work. You think deeply about system reliability, performance, cost optimization, and you're motivated by creating AI experiences that deliver real value to our enterprise customers.


WHAT YOU WILL DO:

Your primary responsibility (95% of your time) will be designing and building AI agent systems across the full stack. Specifically, you will:

  • Architect and implement scalable backend services for AI agent orchestration, including LLM integration, prompt management, context handling, and conversation state management.
  • Design and build robust AI pipelines — implementing RAG systems, agent workflows, tool calling, and chain-of-thought reasoning patterns.
  • Develop frontend interfaces for AI interactions including embeddable widgets, Chrome extensions, chat interfaces, and integration plugins for third-party platforms.
  • Optimize LLM operations — managing token usage, implementing caching strategies, handling rate limits, and building evaluation frameworks for agent performance.
  • Build observability and monitoring systems for AI agents, including prompt versioning, conversation analytics, and quality assurance pipelines.
  • Collaborate on system design decisions around AI infrastructure, model selection, vector databases, and real-time agent capabilities.
  • Stay current with AI/LLM developments and pragmatically adopt new techniques (function calling, multi-agent systems, advanced prompting strategies) where they add value.
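As a toy illustration of the RAG retrieval step mentioned above: rank stored chunks against a query and place the best match into the LLM prompt as context. A production system would use embedding models and a vector database; here plain bag-of-words cosine similarity and an invented corpus keep the sketch self-contained:

```python
import math
import re
from collections import Counter

def tokenize(text: str) -> Counter:
    """Lowercased word counts as a crude stand-in for an embedding."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Invented knowledge-base chunks
chunks = [
    "Refunds are processed within 5 business days of approval.",
    "Agents can escalate a conversation to a human operator at any time.",
    "API keys are rotated every 90 days per the security policy.",
]

def retrieve(query: str, k: int = 1) -> list:
    q = tokenize(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, tokenize(c)), reverse=True)
    return ranked[:k]

top = retrieve("how long do refunds take?")[0]
print(top)
# The retrieved chunk would then be prepended to the LLM prompt as context.
```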


BASIC QUALIFICATIONS:

  • 4–6 years of full-stack development experience, with at least 1 year working with LLMs and AI systems.
  • Strong backend engineering skills: proficiency in Node.js, Python, or similar; experience with API design, database systems, and distributed architectures.
  • Hands-on AI/LLM experience: prompt engineering, working with OpenAI/Anthropic/Google APIs, implementing RAG, managing context windows, and optimizing for latency/cost.
  • Frontend development capabilities: JavaScript/TypeScript, React or Vue, browser extension development, and building embeddable widgets.
  • Systems design thinking: ability to architect scalable, fault-tolerant systems that handle the unique challenges of AI applications (non-determinism, latency, cost).
  • Experience with AI operations: prompt versioning, A/B testing for prompts, monitoring agent behavior, and implementing guardrails.
  • Understanding of vector databases, embedding models, and semantic search implementations.
  • Comfortable working in fast-moving, startup-style environments with high ownership.


PREFERRED QUALIFICATIONS:

  • Experience with advanced LLM techniques: fine-tuning, function calling, agent frameworks (LangChain, LlamaIndex, AutoGPT patterns).
  • Familiarity with ML ops tools and practices for production AI systems.
  • Prior work on conversational AI, chatbots, or virtual assistants at scale.
  • Experience with real-time systems, WebSockets, and streaming responses.
  • Knowledge of browser automation, web scraping, or RPA technologies.
  • Experience with multi-tenant SaaS architectures and enterprise security requirements.
  • Contributions to open-source AI/LLM projects or published work in the field.


WHAT WE OFFER:

  • Competitive salary + meaningful equity.
  • High ownership and the opportunity to shape product direction.
  • Direct impact on cutting-edge AI product development.
  • A collaborative team that values clarity, autonomy, and velocity.
Read more
CipherSonic Labs
Ajay Joshi
Posted by Ajay Joshi
Remote only
3 - 5 yrs
₹20L - ₹30L / yr
C++
C
Linux/Unix
Amazon Web Services (AWS)
Python
+2 more

 

Job Title: Software Developer

Location: Remote

About Us: CipherSonic Labs is a cutting-edge technology company specializing in data security and privacy solutions for enterprises processing sensitive data in the cloud. We develop high-performance cryptographic software and hardware acceleration techniques to enable secure computing. Our team is looking for talented individuals to contribute to innovative projects in secure computing and high-performance software development.

Job Description: We are seeking a Software Developer to assist in the development of high-performance software solutions. This role will involve working on low-level programming, optimizing cryptographic algorithms, and improving performance for security-critical applications. The ideal candidate will have a passion for systems programming, algorithm optimization, and working in a high-performance computing environment.

Key Responsibilities:

·     Develop and optimize software using C/C++ for high-performance computing applications.

·     Work on cryptographic algorithm implementations and performance tuning.

·     Optimize memory management, threading, and parallel computing techniques.

·     Debug, profile, and test software for performance and reliability.

·     Write clean, efficient, and well-documented code.
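One recurring detail in security-critical code like this is that naive comparisons leak timing information. A tiny sketch of the idea (shown in Python for brevity; the role itself is C/C++, and the function name here is purely illustrative):

```python
import hmac

def verify_tag(expected: bytes, received: bytes) -> bool:
    # A plain `expected == received` can return early at the first
    # differing byte, letting an attacker recover a secret tag from
    # response-time measurements. hmac.compare_digest runs in time
    # independent of where the inputs differ.
    return hmac.compare_digest(expected, received)

tag = b"\x8f\x02\xaa\x10"
print(verify_tag(tag, b"\x8f\x02\xaa\x10"))  # matching tag
print(verify_tag(tag, b"\x8f\x02\xaa\x11"))  # differs only in the last byte
```

The same constant-time discipline carries over directly to the C/C++ implementations this role focuses on.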

Qualifications:

·     Completed a B.S. or higher degree in Computer Science, Computer Engineering, or a related field.

·     Strong programming skills in C and C++.

·     Familiarity with Linux-based development environments.

·     Basic understanding of cryptographic algorithms and security principles is a plus.

·     Experience with AWS Lambda, EC2, S3, DynamoDB, API Gateway, Containerization (like Docker, Kubernetes) is a plus.

·     Knowledge of other programming languages such as Python, Rust, or Go is a plus.

·     Strong problem-solving skills and attention to detail.

·     Ability to work independently and collaboratively in a fast-paced startup environment.

What You’ll Gain:

·     Hands-on experience in systems programming, cryptography, and high-performance computing.

·     Opportunities to work on real-world security and privacy-focused projects.

·     Mentorship from experienced software engineers and researchers.

·     Exposure to cutting-edge cryptographic acceleration and secure computing techniques.

·     Potential for future full-time employment based on performance.

Read more
AI Recruiting Platform

Agency job
via Peak Hire Solutions by Dhara Thakkar
Remote only
1 - 15 yrs
₹70L - ₹99L / yr
MySQL
skill iconPython
Microservices
API
skill iconJava
+18 more

Description

Join the company as a Backend Developer and become a pivotal force in building the robust, scalable services that power our innovative platforms. In this role, you will design, develop, and maintain server-side applications, ensuring high performance and reliability for millions of users. You'll collaborate closely with cross-functional product, front-end, and DevOps teams to translate business requirements into clean, efficient code, while participating in code reviews and architectural discussions. Our dynamic environment encourages continuous learning, offering opportunities to work with cutting-edge technologies, cloud infrastructures, and modern development practices. As a key contributor, your work will directly impact product quality, user satisfaction, and the overall success of the company's mission to streamline hiring solutions.


Requirements:

  • 1–15 years of professional experience in backend development, with a strong focus on building APIs and microservices.
  • Proficiency in server‑side languages such as Python, Java, Node.js, or Go, and solid understanding of object‑oriented and functional programming paradigms.
  • Extensive experience with relational (e.g., PostgreSQL, MySQL) and NoSQL databases (e.g., MongoDB, Redis), including schema design and query optimization.
  • Familiarity with cloud platforms (AWS, GCP, Azure) and containerization technologies like Docker and Kubernetes.
  • Hands‑on experience with version control (Git), CI/CD pipelines, and automated testing frameworks.
  • Strong problem‑solving abilities, effective communication skills, and a collaborative mindset for working within multidisciplinary teams.


Roles and Responsibilities:

  • Design, develop, and maintain high‑throughput backend services and RESTful APIs that support core product features.
  • Implement data models and storage solutions, ensuring data integrity, security, and optimal performance.
  • Collaborate with front‑end engineers, product managers, and designers to define technical requirements and deliver end‑to‑end solutions.
  • Participate in code reviews, provide constructive feedback, and uphold coding standards and best practices.
  • Monitor, troubleshoot, and optimize production systems, implementing robust logging, alerting, and performance tuning.
  • Contribute to the continuous improvement of development workflows, including CI/CD automation, testing strategies, and deployment processes.
  • Stay current with emerging technologies and industry trends, proposing innovative approaches to enhance system architecture.
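Much of the "monitor, troubleshoot, and optimize production systems" work above reduces to making service-to-service calls resilient. A minimal sketch of retry with exponential backoff and jitter; every name here is hypothetical, not part of any specific codebase:

```python
import random
import time

def call_with_retries(fn, attempts=4, base_delay=0.1, max_delay=2.0):
    """Call fn(), retrying transient failures with exponential backoff + jitter."""
    for attempt in range(attempts):
        try:
            return fn()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # out of retries; surface the error to the caller
            # Sleep base * 2^attempt, capped, with jitter so many callers
            # hitting the same failing dependency don't retry in lockstep.
            delay = min(max_delay, base_delay * 2 ** attempt)
            time.sleep(delay * random.uniform(0.5, 1.0))

# A stand-in dependency that fails twice, then recovers.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient upstream failure")
    return "ok"

result = call_with_retries(flaky)
```

In production this would typically be wrapped with logging and metrics so retry storms show up in monitoring rather than staying invisible.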


Budget:

  • Job Type: payroll
  • Experience Range: 1–15 years


Read more
Remote only
0 - 0 yrs
₹1L - ₹1.5L / yr
skill iconAmazon Web Services (AWS)
Cyber Security
IT infrastructure
IT security
AWS CloudFormation
+11 more

📍 Position: IT Intern

👩‍💻 Experience: 0–6 Months (Freshers/Recent graduates can apply)

🎓 Qualification: B.Tech (IT) / M.Tech (IT) only

📌 Mode: Remote (WFH)

⏳ Shift: Willingness to work in night/rotational shifts

🗣 Communication: Excellent English


𝐊𝐞𝐲 𝐑𝐞𝐬𝐩𝐨𝐧𝐬𝐢𝐛𝐢𝐥𝐢𝐭𝐢𝐞𝐬:

- Assist in troubleshooting and resolving basic desktop, software, hardware, and network-related issues under supervision.

- Support user account management activities using Azure Entra ID (Azure AD), Active Directory, and Microsoft 365.

- Assist the IT team in configuring, monitoring, and supporting AWS cloud services (EC2, S3, IAM, WorkSpaces).

- Support maintenance and monitoring of on-premises server infrastructure, internal applications, and email services.

- Assist with backups, basic disaster recovery tasks, and security procedures as per company policies.

- Help create and update technical documentation and knowledge base articles.

- Work closely with internal teams and assist in system upgrades, IT infrastructure improvements, and ongoing projects.


💻 Technical Requirements:

- Laptop with i5 or higher processor

- Reliable internet connectivity with at least 100 Mbps speed

Read more
Deltek
Harsha Mehrotra
Posted by Harsha Mehrotra
Remote only
4 - 7 yrs
Best in industry
Artificial Intelligence (AI)
skill icon.NET
skill iconReact.js
Microsoft Windows Azure
ASP.NET
+5 more

Position Responsibilities:

  • Collaborate with the development team to maintain, enhance, and scale the product for enterprise use.
  • Design and develop scalable, high-performance solutions using cloud technologies and containerization.
  • Contribute to all phases of the development lifecycle, following SOLID principles and best practices.
  • Write well-designed, testable, and efficient code with a strong emphasis on Test-Driven Development (TDD), ensuring comprehensive unit, integration, and performance testing.
  • Ensure software designs comply with specifications and security best practices.
  • Recommend changes to improve application architecture, maintainability, and performance.
  • Develop and optimize database queries using T-SQL.
  • Prepare and produce software component releases.
  • Develop and execute unit, integration, and performance tests.
  • Support formal testing cycles and resolve test defects.

AI-Specific Responsibilities:

  • Integrate AI-powered tools and frameworks to enhance code quality and development efficiency.
  • Utilize AI-driven analytics to identify performance bottlenecks and optimize system performance.
  • Implement AI-based security measures to proactively detect and mitigate potential threats.
  • Leverage AI for automated testing and continuous integration/continuous deployment (CI/CD) processes.
  • Guide the adoption and effective use of AI agents for automating repetitive development, deployment, and testing processes within the engineering team.


Qualifications:

  • Bachelor’s degree in Computer Science, IT, or a related field.
  • Highly proficient in ASP.NET Core (C#) and full-stack development.
  • Experience developing REST APIs.
  • Proficiency in front-end technologies (JavaScript, HTML, CSS, Bootstrap, and UI frameworks).
  • Strong database experience, particularly with T-SQL and relational database design.
  • Advanced understanding of object-oriented programming (OOP) and SOLID principles.
  • Experience with security best practices in web and API development.
  • Knowledge of Agile SCRUM methodology and experience in collaborative environments.
  • Experience with Test-Driven Development (TDD).
  • Strong analytical skills, problem-solving abilities, and curiosity to explore new technologies.
  • Ability to communicate effectively, including explaining technical concepts to non-technical stakeholders.
  • High commitment to continuous learning, innovation, and improvement.

AI-Specific Qualifications:

  • Proficiency in AI-driven development tools and platforms such as GitHub Copilot in Agentic Mode.
  • Knowledge of AI-based security protocols and threat detection systems.
  • Experience integrating GenAI or Agentic AI agents into full-stack workflows (e.g., using AI for code reviews, automated bug fixes, or system monitoring).
  • Demonstrated proficiency with AI-assisted development tools and prompt engineering for code generation, testing, or documentation.
Read more
MNC with 5000+ employees

Agency job
via True tech professionals by Saffan Shaikh
Gurugram
6 - 12 yrs
₹15L - ₹28L / yr
skill iconPython
Large Language Models (LLM)
skill iconAmazon Web Services (AWS)
FastAPI

Backend Engineer III – Senior Python Developer (LLM & AI)

Location: Gurgaon, India

Positions: 1

Experience: 6 to 9 Years

Gurgaon Hybrid

About the Role

We are seeking an experienced Backend Engineer III / Senior Python Developer to join our AI engineering team and play a critical role in building scalable, secure, and high-performance backend platforms for LLM and AI-driven applications. You will work as a hands-on individual contributor while collaborating closely with Machine Learning Engineers, Data Scientists, Product Managers, and Cloud/DevOps teams to deliver innovative, production-grade AI solutions.

Key Responsibilities

  • Design, develop, and maintain scalable backend systems and services using Python to support LLM and AI-based applications
  • Build and maintain RESTful APIs and microservices that serve machine learning models and AI components
  • Write clean, modular, efficient, and testable code following industry best practices and coding standards
  • Participate actively in code reviews, ensuring high quality, security, and maintainability of the codebase
  • Debug, profile, and optimize applications to improve performance, reliability, and scalability
  • Identify and resolve performance bottlenecks in AI/ML pipelines and backend services
  • Collaborate with ML engineers, data scientists, and product teams to translate business and technical requirements into robust backend solutions
  • Mentor and support junior developers, promoting a culture of technical excellence and continuous learning
  • Design and implement CI/CD pipelines and automate deployment workflows to ensure consistent and reliable releases
  • Stay up to date with emerging trends in Python, cloud-native development, and LLM/AI engineering practices and apply them to improve systems and processes
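As a stripped-down sketch of the first two responsibilities (a REST endpoint that serves a model), here is a standard-library-only version; the stub `score` function stands in for a real LLM call, and in practice this would be FastAPI or Flask behind proper serving infrastructure:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

def score(text):
    # Stub standing in for real model inference.
    return {"length": len(text), "label": "positive" if "good" in text else "neutral"}

class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/predict":
            self.send_error(404)
            return
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        payload = json.loads(body or b"{}")
        data = json.dumps(score(payload.get("text", ""))).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(data)))
        self.end_headers()
        self.wfile.write(data)

    def log_message(self, *args):
        pass  # keep the example quiet

# Serve on an ephemeral port in a background thread, then call it once.
server = HTTPServer(("127.0.0.1", 0), PredictHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]
req = Request(f"http://127.0.0.1:{port}/predict",
              data=json.dumps({"text": "good model"}).encode(),
              headers={"Content-Type": "application/json"})
resp = json.loads(urlopen(req).read())
server.shutdown()
```

A real service adds input validation, auth, streaming, and observability on top of this skeleton, but the request/response contract is the same.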

Required Skills & Experience

  • 6 to 9 years of strong hands-on experience in Python development
  • Solid understanding of Python software design, architecture patterns, and testing best practices
  • Proven experience working on AI, Machine Learning, or LLM-based projects
  • Strong experience in building and consuming RESTful APIs and microservices architectures
  • Hands-on experience with FastAPI, Flask, or similar model-serving frameworks
  • Strong debugging, performance profiling, and optimization skills
  • Experience with CI/CD tools and workflows (e.g., GitHub Actions, Azure DevOps, Jenkins, etc.)
  • Working knowledge of Docker and Kubernetes is a strong plus
  • Excellent analytical, problem-solving, and communication skills
  • Ability to work independently in a fast-paced, evolving AI/ML environment while mentoring junior team members

Education & Certifications

  • Bachelor’s degree in Computer Science, Software Engineering, or a related technical field
  • AWS or other relevant cloud certifications are preferred but not mandatory

Why Join Us?

  • Work on cutting-edge AI and LLM platforms
  • Collaborate with top-tier engineering and data science teams
  • Opportunity to influence system architecture and technical direction
  • Competitive compensation and career growth opportunities
Read more
MNC with 5000+ employees

Agency job
via True tech professionals by Saffan Shaikh
Gurugram
9 - 18 yrs
₹30L - ₹70L / yr
skill iconAmazon Web Services (AWS)
skill iconNodeJS (Node.js)
skill iconReact.js
Cloud Computing
Architecture
+2 more

Job Title: Principal Architect / Scalability Lead (AWS)

📍 Location: Gurgaon (Hybrid)


🕒 Employment Type: Full-Time

Role Overview

We are seeking a Principal Architect / Scalability Lead with deep expertise in AWS and large-scale distributed systems to architect and scale cloud-native products from MVP to enterprise scale.

This role demands a senior technical leader who has proven experience designing systems that handle high throughput, large concurrent workloads, and enterprise-grade reliability, while ensuring exceptional end-user experience.

You will work closely with Product, Data Engineering, AI/ML, and Backend teams to define architecture standards, scalability roadmaps, and engineering best practices.

Key Responsibilities

🏗 Architecture & Scalability Leadership

  • Architect highly scalable, resilient, and high-performance cloud-native systems on AWS.
  • Design distributed systems capable of supporting 100K+ concurrent users.
  • Lead architecture evolution from MVP to enterprise-grade deployment.
  • Translate business and consumer requirements into robust technical architecture.
  • Drive scalability planning, capacity modeling, and performance engineering.

🔄 End-to-End Ownership

  • Own full SDLC visibility from discovery and design to release, monitoring, and optimization.
  • Establish best practices for:
      • Microservices architecture
      • Distributed systems design
      • Observability & monitoring
      • DevSecOps & CI/CD
  • Ensure system uptime, fault tolerance, and cost efficiency.

☁ AWS Cloud & Infrastructure

  • Design and implement scalable systems using AWS services.
  • Lead containerization and orchestration using Docker and Kubernetes (EKS).
  • Architect secure, automated CI/CD pipelines.
  • Drive cloud cost optimization and infrastructure efficiency.

📈 Performance & Reliability Engineering

  • Define and enforce SLAs, SLOs, and reliability metrics.
  • Lead performance testing, load testing, and scalability validation.
  • Implement monitoring, alerting, and observability frameworks.
  • Design fault-tolerant and highly available systems.

🧠 Backend, Data & AI Collaboration

  • Provide architectural guidance for:
      • Backend services using Node.js and Python
      • Frontend platforms using React / Next.js
      • Data platforms using Snowflake
  • Collaborate with Data Engineering and AI/ML teams on data-intensive and AI-driven systems.
  • Design architectures supporting asynchronous processing, caching, and event-driven workflows.
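The asynchronous-processing pattern referenced above can be sketched as a worker pool draining a shared queue, which decouples request handling from the work itself (all names illustrative):

```python
import asyncio

async def worker(name, queue, results):
    while True:
        job = await queue.get()
        try:
            # Stand-in for real work (an API call, a DB write, ...).
            await asyncio.sleep(0)
            results.append((name, job * 2))
        finally:
            queue.task_done()

async def main():
    queue = asyncio.Queue()
    results = []
    # A small pool of workers consumes jobs concurrently; producers
    # just enqueue and move on instead of blocking on each job.
    workers = [asyncio.create_task(worker(f"w{i}", queue, results))
               for i in range(3)]
    for job in range(10):
        queue.put_nowait(job)
    await queue.join()  # wait until every job has been processed
    for w in workers:
        w.cancel()
    await asyncio.gather(*workers, return_exceptions=True)
    return results

results = asyncio.run(main())
```

At scale the in-process queue becomes Kafka or SQS and the workers become separate services, but the backpressure and completion-tracking ideas are identical.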

👥 Leadership & Governance

  • Mentor senior engineers and guide architecture best practices.
  • Lead architecture governance and design reviews.
  • Influence senior stakeholders with data-driven technical decisions.
  • Drive cross-functional alignment across Product, Engineering, Data, and AI teams.

Required Qualifications

  • 8–15 years of experience in software engineering.
  • Proven experience scaling distributed systems handling 100K+ users or high-throughput workloads.
  • Deep hands-on expertise in AWS cloud architecture.
  • Strong experience with Docker, Kubernetes, and container orchestration.
  • Expertise in microservices, caching strategies, asynchronous processing, and distributed systems.
  • Strong understanding of performance engineering and reliability frameworks.
  • Experience building enterprise-grade systems for large-scale organizations.

Preferred Skills

  • Experience with event-driven architectures (Kafka, SQS, SNS, etc.).
  • Knowledge of database scalability and data warehousing (Snowflake).
  • Exposure to Data Engineering and AI/ML platforms.
  • Strong stakeholder communication and strategic thinking skills.


Read more
VDart
Remote only
7 - 15 yrs
₹15L - ₹20L / yr
Test Automation (QA)
SaaS
skill iconMachine Learning (ML)
Artificial Intelligence (AI)
Large Language Models (LLM)
+7 more

Senior Quality Engineer – AI Products

Fulltime

Remote

Requirements

● 3-7 years of experience in software quality engineering, preferably in SaaS environments with a platform or infrastructure focus.

● Strong demonstrated experience testing distributed systems, APIs, data pipelines, or cloud-based infrastructure.

● Experience designing and executing test plans for AI/ML systems, data pipelines, or shared platform services.

● Familiarity with AI/LLM infrastructure concepts such as retrieval-augmented generation (RAG), vector search, model routing, and observability.

● Strong demonstrated proficiency in Linux distributions and CLI-based testing, including log file analysis and other troubleshooting tasks.

● Experience with AWS or other major cloud platforms.

● Basic Python/Shell scripting knowledge with ability to edit existing scripts and create new automation for pipeline validation.

● Advanced skills with API and SQL testing methodologies.

● Familiarity with test management tools such as TestRail; experience with Qase is a plus.

● Demonstrated experience leveraging Version Control Systems with a focus on GitHub.

● Experience with testing tools: Jira, Sentry, DataDog.

● Strong understanding of Agile/Scrum methodologies.

● Proven track record of mentoring junior engineers and contributing to process improvements.

● Excellent analytical and problem-solving abilities.

● Strong communication skills with ability to present to both technical and non-technical stakeholders.

● Proficiency in English (C1-C2 level).

● Most importantly: The courage to be vocal about quality concerns, platform risks, and testing impediments.
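In practice, the "basic Python/Shell scripting ... for pipeline validation" requirement means small checks like the one below, which scans a run's log lines and fails the run on errors; the log format and thresholds here are hypothetical:

```python
def validate_pipeline_log(lines, max_warnings=5):
    """Return (ok, summary) for one run's log lines.

    A run fails on any ERROR line, or on more than max_warnings WARNINGs.
    """
    errors = [line for line in lines if " ERROR " in line]
    warnings = [line for line in lines if " WARNING " in line]
    ok = not errors and len(warnings) <= max_warnings
    return ok, {"errors": len(errors), "warnings": len(warnings)}

log = [
    "2024-05-01 10:00:01 INFO ingest started",
    "2024-05-01 10:00:07 WARNING 3 rows skipped: bad timestamp",
    "2024-05-01 10:00:09 INFO ingest finished",
]
ok, summary = validate_pipeline_log(log)
```

Hooked into CI, a script like this turns log inspection from a manual troubleshooting task into an automated gate.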

 

Preferred Qualifications

● Experience with AI/ML evaluation frameworks or tools (e.g., LLM-as-judge, Ragas, custom eval harnesses).

● Hands-on experience with document parsing, OCR, or unstructured data pipelines.

● Experience with observability tooling (e.g., Datadog, Grafana, OpenTelemetry) from a QA perspective.

● Experience testing SaaS products in regulated industries (e.g., PCI-compliant environments).

● Basic understanding of containerization, Kubernetes, and CI/CD pipelines (Jenkins, CircleCI).

● Experience with microservice architectures and distributed systems.

● Knowledge of basic non-functional testing (security, performance) with emphasis on AI-specific concerns.

● Background in security or compliance testing for AI systems.

● Certifications such as ISTQB or CSTE.

● Experience working in legal technology, fintech, or professional services software.

● Familiarity with AI-assisted testing tools and leveraging LLMs as a productivity-boosting tool.

● Experience evaluating and implementing new QE tools and processes.

 

Read more
Vintronics Consulting
Remote, Gurugram, Delhi, Noida, Ghaziabad, Faridabad
4 - 8 yrs
₹8L - ₹17L / yr
skill iconJava
skill iconReact.js
skill iconAmazon Web Services (AWS)

Job Summary

We are looking for an experienced Java Full Stack Developer with strong expertise in Java, React.js, and AWS to design, develop, and maintain scalable web applications. The ideal candidate should have experience building high-performance applications and working across both front-end and back-end technologies.

 

Key Responsibilities

  • Develop and maintain full-stack web applications using Java and React.js
  • Design and build RESTful APIs and microservices using Java frameworks
  • Develop responsive and interactive frontend interfaces using React.js
  • Work with AWS services for deployment, scalability, and infrastructure
  • Collaborate with cross-functional teams including product managers, designers, and QA
  • Write clean, maintainable, and efficient code following best practices
  • Participate in code reviews, testing, debugging, and performance optimization
  • Implement CI/CD pipelines and cloud-based solutions

 

Required Skills

  • Strong experience in Java (Spring Boot / Spring Framework)
  • Good knowledge of React.js, JavaScript, HTML, CSS
  • Experience building REST APIs and microservices architecture
  • Hands-on experience with AWS services (EC2, S3, Lambda, RDS, etc.)
  • Familiarity with Git, CI/CD pipelines, and Agile development
  • Experience with database technologies (MySQL, PostgreSQL, or MongoDB)

 

Preferred Skills

  • Experience with Docker / Kubernetes
  • Knowledge of serverless architecture
  • Experience working in cloud-native environments
  • Understanding of system design and scalable architecture
Read more
MNK Global Corporate Solutions
Rithika Raghavan
Posted by Rithika Raghavan
Bengaluru (Bangalore)
5 - 7 yrs
₹15L - ₹20L / yr
skill iconPython
skill iconDjango
skill iconAmazon Web Services (AWS)

About the Role

We are looking for an experienced Senior Backend Developer to design and build scalable, secure, and high-performance backend systems. The ideal candidate will have deep expertise in Python/Django, microservices architecture, and cloud technologies, along with strong problem-solving skills and leadership capabilities.


Key Responsibilities

• Design and develop backend services using Django and Python.

• Architect and implement microservices-based solutions for scalability and maintainability.

• Work with PostgreSQL and Redis for efficient data storage and caching.

• Build and maintain RESTful APIs, following robust API design principles.

• Implement system design best practices for high availability and fault tolerance.

• Containerize applications using Docker and manage deployments with Kubernetes.

• Integrate with cloud platforms (AWS/Azure) for hosting and infrastructure management.

• Apply security best practices to protect data and application integrity.

• Collaborate with frontend, QA, and DevOps teams for seamless delivery.

• Mentor junior developers and conduct code reviews to maintain quality standards.
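The PostgreSQL-plus-Redis pairing mentioned above is most often used in a cache-aside pattern. A minimal sketch, with a plain dict standing in for Redis and a stub function standing in for a database query (all names hypothetical):

```python
cache = {}                 # stands in for Redis
db_reads = {"count": 0}    # counts round-trips to the "database"

def load_user_from_db(user_id):
    db_reads["count"] += 1
    return {"id": user_id, "name": f"user-{user_id}"}

def get_user(user_id):
    # Cache-aside: try the cache first, fall back to the DB on a miss,
    # then populate the cache so the next read is cheap.
    key = f"user:{user_id}"
    if key in cache:
        return cache[key]
    user = load_user_from_db(user_id)
    cache[key] = user      # real code would also set a TTL here
    return user

first = get_user(42)       # miss: hits the DB, fills the cache
second = get_user(42)      # hit: served from cache, no DB read
```

The hard parts in production are invalidation and TTL choice, not the lookup itself; that is where the "security and data integrity" responsibilities above intersect with caching.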


Required Skills & Expertise

• Django/Python – Advanced proficiency in backend development.

• Microservices Architecture – Strong understanding of distributed systems.

• PostgreSQL & Redis – Expertise in relational and in-memory databases.

• Docker/Kubernetes – Hands-on experience with containerization and orchestration.

• API Design & System Design – Ability to design scalable and secure systems.

• Cloud (AWS/Azure) – Practical experience with cloud services and deployments.

• Security Best Practices – Knowledge of authentication, authorization, and data protection.


Preferred Qualifications

• Experience with CI/CD pipelines and DevOps practices.

• Familiarity with message queues (e.g., RabbitMQ, Kafka).

• Exposure to monitoring tools (Prometheus, Grafana).


What We Offer

• Competitive salary and benefits.

• Opportunity to work on cutting-edge backend technologies.

• Collaborative and growth-oriented work environment.

Read more
TVARIT GmbH

at TVARIT GmbH

2 candid answers
DrSoumya Sahadevan
Posted by DrSoumya Sahadevan
Pune
7 - 15 yrs
₹20L - ₹30L / yr
skill iconAmazon Web Services (AWS)
Windows Azure
Google Cloud Platform (GCP)
PySpark
databricks
+2 more

About TVARIT

TVARIT GmbH specializes in developing and delivering cutting-edge artificial intelligence (AI) solutions for the metal industry, including steel, aluminum, copper, cast iron, and more. Our software products empower customers to make intelligent, data-driven decisions, driving advancements in Predictive Quality (PsQ), Predictive Maintenance (PdM), and Energy Consumption Reduction (PsE), etc. With a strong portfolio of renowned reference customers, state-of-the-art technology, a talented research team from prestigious universities, and recognition through esteemed awards such as the EU Horizon 2020 AI Prize, TVARIT is recognized as one of the most innovative AI companies in Germany and Europe. We are seeking a self-motivated individual with a positive "can-do" attitude and excellent oral and written communication skills in English to join our team.


Job Description: We are looking for a Senior Data Engineer with strong expertise in Azure Databricks, PySpark, and distributed computing to develop and optimize scalable ETL pipelines for manufacturing analytics. The role involves working with high-frequency industrial data to enable real-time and batch data processing.


Key Responsibilities

· Build scalable real-time and batch processing workflows using Azure Databricks, PySpark, and Apache Spark.

· Perform data pre-processing, including cleaning, transformation, deduplication, normalization, encoding, and scaling to ensure high-quality input for downstream analytics.

· Design and maintain cloud-based data architectures, including data lakes, lakehouses, and warehouses, following Medallion Architecture.

· Deploy and optimize data solutions on Azure (preferred), AWS, or GCP with a focus on performance, security, and scalability.

· Develop and optimize ETL/ELT pipelines for structured and unstructured data from IoT, MES, SCADA, LIMS, and ERP systems.

· Automate data workflows using CI/CD and DevOps best practices, ensuring security and compliance with industry standards.

· Monitor, troubleshoot, and enhance data pipelines for high availability and reliability.

· Utilize Docker and Kubernetes for scalable data processing.

· Collaborate with automation team, data scientists and engineers to provide clean, structured data for AI/ML models.
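The pre-processing step described above (cleaning, deduplication, normalization) reduces, in miniature, to something like the following; in the actual role these would be PySpark DataFrame operations over far larger data, and the field names are invented:

```python
def preprocess(readings):
    """Clean, deduplicate, and min-max normalize sensor readings."""
    # 1. Cleaning: drop records with a missing or non-numeric temperature.
    clean = [r for r in readings if isinstance(r.get("temp"), (int, float))]

    # 2. Deduplication: keep the first record per (sensor_id, ts) key.
    seen, unique = set(), []
    for r in clean:
        key = (r["sensor_id"], r["ts"])
        if key not in seen:
            seen.add(key)
            unique.append(r)

    # 3. Normalization: min-max scale temperature into [0, 1].
    temps = [r["temp"] for r in unique]
    lo, hi = min(temps), max(temps)
    span = (hi - lo) or 1.0  # avoid division by zero on constant data
    return [dict(r, temp_norm=(r["temp"] - lo) / span) for r in unique]

rows = [
    {"sensor_id": "s1", "ts": 1, "temp": 20.0},
    {"sensor_id": "s1", "ts": 1, "temp": 20.0},   # exact duplicate
    {"sensor_id": "s1", "ts": 2, "temp": None},   # missing value
    {"sensor_id": "s2", "ts": 1, "temp": 30.0},
]
result = preprocess(rows)
```

In Spark the same three stages map onto `filter`, `dropDuplicates`, and a computed column, which is what makes them parallelizable over high-frequency industrial data.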


Desired Skills and Qualifications

· Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field.

· 7+ years of experience in core data engineering, with a strong focus on cloud platforms such as Azure (preferred), AWS, or GCP.

· Proficiency in PySpark, Azure Databricks, Python, and Apache Spark, etc.

· 2 years of team handling experience.

· Expertise in relational databases (e.g., SQL Server, PostgreSQL), time series databases (e.g., InfluxDB), and NoSQL databases (e.g., MongoDB, Cassandra).

· Experience in containerization (Docker, Kubernetes).

· Strong analytical and problem-solving skills with attention to detail.

· Good to have MLOps, DevOps including model lifecycle management

· Excellent communication and collaboration skills, with a proven ability to work effectively as a team player.

· Comfortable working in a dynamic, fast-paced startup environment, adapting quickly to changing priorities and responsibilities.

Read more
REConnect Energy

at REConnect Energy

4 candid answers
2 recruiters
Ariba Khan
Posted by Ariba Khan
Bengaluru (Bangalore)
4.5 - 7 yrs
Upto ₹30L / yr (Varies)
skill iconPython
MLOps
skill iconMachine Learning (ML)
SQL
skill iconAmazon Web Services (AWS)

About Us:

REConnect Energy’s GRIDConnect platform helps integrate and manage energy generation and consumption for 1000s of renewable energy assets and grid operators. We are currently serving customers across India, Bhutan and the Middle East with expansion planned in US and European markets.


We are headquartered in Central Bangalore with a team of 150+ and growing. You will join the Bangalore based Engineering team as a senior member and work at the intersection of Energy, Weather & Climate Sciences and AI. 


Responsibilities:

● Engineering - Take complete ownership of engineering stacks including Data Engineering and MLOps. Define and maintain software systems architecture for high availability 24x7 systems.

● Leadership - Lead a team of engineers and analysts managing engineering development as well as round the clock service delivery. Provide mentorship and technical guidance to team members and contribute towards their professional growth. Manage weekly and monthly reviews with team members and senior management.

● Product Development - Contribute towards new product development through engineering solutions to product requirements. Interact with cross-functional teams to bring forward a technology perspective.

● Operations - Manage delivery of critical services to power utilities with expectations of zero downtime. Take ownership for uninterrupted product uptime. 


Requirements:

● 4-5 years of experience building highly available systems

● 2-3 years experience leading a team of engineers and analysts

● Bachelors or Master’s degree in Computer Science, Software Engineering, Electrical Engineering or equivalent

● Proficient in Python programming, with expertise in data engineering and machine learning deployment

● Experience with relational (e.g., MySQL) and NoSQL databases

● Experience in developing and maintaining critical and high availability systems will be given strong preference

● Experience in software design using design principles and architectural modeling.

● Experience working with AWS cloud platform.

● Strong analytical and data driven approach to problem solving 

Read more
NeoGenCode Technologies Pvt Ltd
Mumbai
5 - 10 yrs
₹12L - ₹24L / yr
DevOps
skill iconAmazon Web Services (AWS)
Windows Azure
Google Cloud Platform (GCP)
skill iconKubernetes
+12 more

Job Title : Senior DevOps Engineer (Only Mumbai Candidates)

Experience : 5+ Years

Location : Mumbai (On-site)

Notice Period : Immediate to 15 Days

Interview Process : 1 Internal Round + 1 Client Round


Mandatory Skills :

Multi-Cloud (AWS/GCP/Azure – any two), Kubernetes, Terraform, Helm (writing Helm Charts), CI/CD (GitLab CI/Jenkins/GitHub Actions), GitOps (ArgoCD/FluxCD), Multi-tenant deployments, Stateful microservices on Kubernetes, Enterprise Linux.


Role Overview :

We are looking for a Senior DevOps Engineer to design, build, and manage scalable cloud infrastructure and DevOps pipelines for product-based platforms.

The ideal candidate should have strong experience with Kubernetes, Terraform, Helm Charts, CI/CD, and GitOps practices.


Key Responsibilities :

  • Design and manage scalable cloud infrastructure across AWS/GCP/Azure.
  • Deploy and manage microservices on Kubernetes clusters.
  • Build and maintain Infrastructure as Code using Terraform and Helm.
  • Implement CI/CD pipelines using GitLab CI, Jenkins, or GitHub Actions.
  • Implement GitOps workflows using ArgoCD or FluxCD.
  • Ensure secure, scalable, and reliable DevOps architecture.
  • Implement monitoring and logging using Prometheus, Grafana, or ELK.

Good to Have :

  • Packer, OpenShift/Rancher/K3s, On-prem deployments, PaaS experience, scripting (Bash/Python), Terraform modules.
Read more
ManpowerGroup
Shirisha Jangi
Posted by Shirisha Jangi
Bengaluru (Bangalore), Hyderabad
7 - 15 yrs
₹20L - ₹27L / yr
Data engineering
skill iconJava
skill iconPython
SQL
skill iconScala
+3 more

Immediate hiring for Senior Data Engineer

📍 Location: Hyderabad/Bangalore

💼 Experience: 7+ Years

🕒 Employment Type: Full-Time

🏢 Work Mode: Hybrid

📅 Notice Period: 0–1 month (serving notice only)

 

   We are seeking a highly skilled and motivated Data Engineer to join our innovative team. As a Data Engineer, you will be responsible for designing, building, and maintaining scalable data pipelines and infrastructure to support our enterprise-wide data-driven initiatives. You will collaborate closely with cross-functional teams to ensure the availability, reliability, and performance of our data systems and solutions.

 

🔎 Key Responsibilities:

  • Data Pipeline Development
  • Data Modeling and Architecture
  • Data Integration and API Development
  • Data Infrastructure Management
  • Collaboration and Documentation

 

🎯 Required Skills:

  • Bachelor’s degree in computer science, Engineering, Information Systems, or a related field.
  • 7+ years of proven experience in data engineering, software development, or related technical roles.
  • 7+ years of experience in programming languages commonly used in data engineering (Python, Java, SQL, Stored Procedures, Scala, etc.).
  • 7+ years of experience with database systems, data modeling, and advanced SQL.
  • 7+ years of experience with ETL tools such as SSIS, Snowflake, Databricks, Azure Data Factory, Stored Procedures, etc.
  • Experience with big data technologies such as Hadoop, Spark, Kafka, etc.
  • 5+ years of experience working with cloud platforms like Azure, AWS, or Google Cloud.
  • Strong analytical, problem-solving, and debugging skills with high attention to detail.
  • Excellent communication and collaboration skills in a team-oriented, fast-paced environment.
  • Ability to adapt to rapidly evolving technologies and business requirements.

 

 

Product development MNC
Hyderabad
12 - 20 yrs
₹45L - ₹60L / yr
Node.js
React.js
Amazon Web Services (AWS)
Fullstack Developer
TypeScript

Work Mode: 5 days in office

Notice: Max 30 days

*1 final round will be in-person


Responsibilities

●      Own and champion the development process of our web-based applications, including SDLC, coding standards, code reviews, check-ins and builds, issue tracking, bug triages, incident management, and testing.

●      Build and maintain a high-performing software development team including hiring, training, and onboarding.

●      Identify opportunities to eliminate non-value add activities to enable our developers to do what they love best—developing! No pointless meetings, no unnecessary interruptions, no random changes of course, no new problems from on high dumped in their lap each month.

●      Identify growth opportunities for team members to continue to learn and develop in a supportive environment.

●      Provide an engaging and challenging landscape for career growth.

●      Provide leadership, mentorship, and motivation to the engineering team to sustain high levels of productivity and morale.

●      Collaborate with Product Management on product requirements.

●      Champion and advocate for the engineering team to the rest of the organization.

●      Create a positive culture of fairness, quality, and accountability while challenging the status quo and bringing new ideas to light.

●      Participate as a member of company’s Engineering Leadership team to build a high performing organization across multiple locations.

 

Requirements

●      12+ years of software development experience, 2+ years of development leadership experience.

●      Demonstrated technical leadership and people management skills.

●      Experience with agile development processes.

●      Hands-on experience in driving/leading technical efforts in cloud-based applications.

●      Proven track record of driving quality within a team, with a commitment to automated testing.

●      Strong communication skills with the ability to effectively influence product at different levels of abstraction and communicate to both technical and non-technical audiences.

●      Excellent coding skills to provide guidance and craftsmanship for our engineers.

●      Technical acumen to exercise sound judgment, making optimal short-term decisions without sacrificing long-term technology goals.

●      Demonstrated critical-analysis skills to drive continuous improvement of technology, process, and productivity.

 

Technical Experience

We are looking for someone who has experience working in environments that utilize some of the following technologies:

●      AWS & Azure

●      TypeScript

●      Node.js

●      React.js

●      Material UI

●      Jira

●      GitHub

●      CI/CD

●      SQL (MySQL, PostgreSQL, SQL Server)

●      MongoDB

Albert Invent

Posted by Nikita Sinha
Bengaluru (Bangalore)
1 - 4 yrs
Upto ₹22L / yr (Varies)
Automation
Terraform
Python
Node.js
Amazon Web Services (AWS)

The Software Engineer – SRE will be responsible for building and maintaining highly reliable, scalable, and secure infrastructure that powers the Albert platform. This role focuses on automation, observability, and operational excellence to ensure seamless deployment, performance, and reliability of core platform services.


Responsibilities

  • Act as a passionate representative of the Albert product and brand.
  • Work closely with Product Engineering and other stakeholders to plan and deliver core platform capabilities that enable scalability, reliability, and developer productivity.
  • Work with the Site Reliability Engineering (SRE) team on shared full-stack ownership of a collection of services and/or technology areas.
  • Understand the end-to-end configuration, technical dependencies, and overall behavioral characteristics of all microservices.
  • Be responsible for the design and delivery of the mission-critical stack with a focus on security, resiliency, scale, and performance.
  • Own end-to-end performance and operability.
  • Demonstrate a clear understanding of automation and orchestration principles.
  • Act as the escalation point for complex or critical issues that have not yet been documented as Standard Operating Procedures (SOPs).
  • Use a deep understanding of service topology and dependencies to troubleshoot issues and define mitigations.

Requirements

  • Bachelor’s degree in Computer Science, Engineering, or equivalent experience.
  • 1+ years of software engineering experience, with at least 1 year in an SRE role focused on automation.
  • Strong experience with Infrastructure as Code (IaC), preferably using Terraform.
  • Strong expertise in Python or Node.js, including designing RESTful APIs and microservices architecture.
  • Strong expertise in cloud infrastructure (AWS) and platform technologies including microservices, APIs, and distributed systems.
  • Hands-on experience with observability stacks including centralized log management, metrics, and tracing.
  • Familiarity with CI/CD tools such as CircleCI and performance testing using K6.
  • Passion for bringing more automation and engineering standards to organizations.
  • Experience building high-performance APIs with low latency (<200 ms).
  • Ability to work in a fast-paced environment and collaborate with peers and leaders.
  • Ability to lead technically, mentor engineers, and contribute to hiring and team growth.

Good to Have

  • Experience with Kubernetes and container orchestration.
  • Familiarity with observability tools such as Prometheus, Grafana, OpenTelemetry, Datadog.
  • Experience building internal developer platforms (IDPs) or reusable engineering frameworks.
  • Exposure to ML infrastructure or data engineering workflows.
  • Experience working in compliance-heavy environments (SOC2, HIPAA, etc.).


About Albert Invent


Albert Invent is a cutting-edge AI-driven software company headquartered in Oakland, California, on a mission to empower scientists and innovators in chemistry and materials science to invent the future faster. Scientists in 30+ countries use Albert to accelerate R&D with AI trained like a chemist, helping bring better products to market faster.

Why Join Albert Invent

  • Work with a mission-driven, fast-growing global team at the intersection of AI, data, and advanced materials science.
  • Collaborate with world-class scientists and technologists to redefine how new materials are discovered and developed.
  • Culture built on curiosity, collaboration, ownership, and continuous learning.
  • Opportunity to build cutting-edge AI tools that accelerate real-world R&D and solve global challenges such as sustainability and advanced manufacturing.


Pace Wisdom Solutions
Bengaluru (Bangalore)
7 - 10 yrs
₹15L - ₹30L / yr
.NET
ASP.NET
ASP.NET MVC
MVC Framework
Amazon Web Services (AWS)
+1 more

Location: Bangalore

Experience required: 7-10 years.

Key skills: .NET Core, ASP.NET, Microsoft Azure, MVC, AWS


"At Pace Wisdom Solutions, our .NET team is a dynamic and collaborative group of experts specializing in end-to-end development. With a focus on both front-end and back-end technologies, we leverage the robust .NET framework and Azure to deliver innovative and scalable solutions. Our agile approach ensures adaptability to industry changes, empowering us to provide clients with cutting-edge and tailored applications."


We are seeking a highly skilled and experienced Senior .NET Developer with a minimum of 7 years of hands-on experience. The ideal candidate will possess expertise in both front-end and back-end development, with a strong background in MVC architecture and exposure to Microsoft Azure technologies. The role requires an individual who can work independently, lead a team effectively, and contribute to the successful delivery of projects.


Engineering Culture at Pace Wisdom:

We foster a collaborative and communicative environment where engineers are empowered to share ideas freely. Teamwork is paramount, and we believe the best solutions come from diverse perspectives. We are committed to promoting from within, providing clear career paths and mentorship opportunities to help our engineers reach their full potential. Our culture prioritizes continuous learning and growth, offering a safe space to experiment, innovate, and refine your skills.


Responsibilities:

• Create scalable solutions by understanding business requirements, writing code, and testing according to best practices.

• Own deliverables and collaborate with the team, including our customers, QA, design, and other stakeholders, to drive successful project delivery.

• Advocate best practices and mentor teams to follow them: documentation, unit testing, code reviews, etc.

• Comply with security policies and processes.


Qualifications:

• 7-10 years of professional experience in developing applications using the .NET Framework, .NET Core, Azure services, and Entity Framework

• Good knowledge of common software architecture design patterns, Object Oriented Programming, Data structures, Algorithms, Database design patterns and other best practices.

• Exposure to Cloud technologies (AWS, Azure, Google Cloud - at least one of them)

• Exposure to developing SPAs using React, Angular, or Vue.js

• Experience with microservices and messaging systems (RabbitMQ/Kafka)

• Proven ability to lead and mentor development teams.

• Effective communication and interpersonal skills.


About the Company:

Pace Wisdom Solutions is a deep-tech Product engineering and consulting firm. We have offices in San Francisco, Bengaluru, and Singapore. We specialize in designing and developing bespoke software solutions that cater to solving niche business problems.


We engage with our clients at various stages:

• Right from the idea stage to scope out business requirements.

• Design & architect the right solution and define tangible milestones.

• Setup dedicated and on-demand tech teams for agile delivery.

• Take accountability for successful deployments to ensure efficient go-to-market implementations.


Pace Wisdom has been working with Fortune 500 Enterprises and growth-stage startups/SMEs since 2012. We also work as an extended Tech team and at times we have played the role of a Virtual CTO too. We believe in building lasting relationships and providing value-add every time and going beyond business. 

Hyderabad
5 - 8 yrs
₹15L - ₹30L / yr
ETL
Snowflake
Python
SQL
Fivetran
+4 more

Role Overview


We are looking for a Senior Data Quality Engineer who is passionate about building reliable and scalable data platforms. In this role, you will ensure high-quality, trustworthy data across pipelines and analytics systems by designing robust data ingestion frameworks, implementing data quality checks, and optimizing data transformations.

You will work closely with data engineers, analytics teams, and product stakeholders to ensure data accuracy, consistency, and reliability across the organization.


Key Responsibilities


  • Cleanse, normalize, and enhance data quality across operational systems and new data sources flowing through the data platform.
  • Design, build, monitor, and maintain ETL/ELT pipelines using Python, SQL, and Airflow.
  • Develop and optimize data models, tables, and transformations in Snowflake.
  • Build and maintain data ingestion workflows, including API integrations, file ingestion, and database connectors.
  • Ensure data reliability, integrity, and performance across pipelines.
  • Perform comprehensive data profiling to understand data structures, detect anomalies, and resolve inconsistencies.
  • Implement data quality validation frameworks and automated checks across pipelines.
  • Use data integration and data quality tools such as Deequ, Great Expectations (GX), Splink, Fivetran, Workato, Informatica, etc., to onboard new data sources.
  • Troubleshoot pipeline failures and implement data monitoring and alerting mechanisms.
  • Collaborate with engineering, analytics, and product teams in an Agile development environment.


Required Technical Skills


Core Technologies


  • Strong hands-on experience with SQL
  • Python for data transformation and pipeline development
  • Workflow orchestration using Apache Airflow
  • Experience working with Snowflake data warehouse


Data Engineering Expertise


  • Strong understanding of ETL / ELT pipeline design
  • Data profiling and data quality validation techniques
  • Experience building data ingestion pipelines from APIs, files, and databases
  • Data modeling and schema design


Tools & Platforms


  • Data Quality Tools: Deequ, Great Expectations (GX), Splink
  • Data Integration Tools: Fivetran, Workato, Informatica
  • Cloud Platforms: AWS (preferred)
  • Version Control & DevOps: Git, CI/CD pipelines


Qualifications


  • 5–8 years of experience in Data Quality Engineering / Data Engineering
  • Strong expertise in SQL, Python, Airflow, and Snowflake
  • Experience working with large-scale datasets and distributed data systems
  • Solid understanding of data engineering best practices across the development lifecycle
  • Experience working in Agile environments (Scrum, sprint planning, etc.)
  • Strong analytical and problem-solving skills


What We Look For


  • Passion for data accuracy, reliability, and governance
  • Ability to identify and resolve complex data issues
  • Strong collaboration skills across data, engineering, and analytics teams
  • Ownership mindset and attention to data integrity and performance


Why Join Us


  • Opportunity to work on modern data platforms and large-scale datasets
  • Collaborate with high-performing data and engineering teams
  • Exposure to cloud data architecture and modern data tools
  • Competitive compensation and strong career growth opportunities
HireTo
Posted by Rishita Sharma
Hyderabad
5 - 13 yrs
₹15L - ₹30L / yr
Snowflake
Python
SQL
Windows Azure
Databricks
+4 more

Position Title: Senior Data Engineer (Founding Member) - Insurtech Startup

Location: Hyderabad (Onsite)

Immediate to 15 days Joiners

Experience: 5–13 Years

Role Summary

We are looking for a Senior Data Engineer who will play a foundational role in:

  • Client onboarding from a data perspective
  • Understanding complex insurance data flows
  • Designing secure, scalable ingestion pipelines
  • Establishing strong data modeling and governance standards

This role sits at the intersection of technology, data architecture, security, and business onboarding.


Key Responsibilities

  • Lead end-to-end data onboarding for new clients and partners, working closely with business and product teams to understand client systems, data formats, and migration constraints
  • Define and implement data ingestion strategies supporting multiple sources and formats, including CSV, XML, JSON files, and API-based integrations
  • Design, build, and operate robust, scalable ETL/ELT pipelines, supporting both batch and near-real-time data processing
  • Handle complex insurance-domain data including Contracts, Claims, Reserves, Cancellations, and Refunds
  • Architect ingestion pipelines with security-by-design principles, including secure credential management (keys, secrets, tokens), encryption at rest and in transit, and network-level controls where required
  • Enforce role-based and attribute-based access controls, ensuring strict data isolation, tenancy boundaries, and stakeholder-specific access rules
  • Design, maintain, and evolve canonical data models that support operational workflows, reporting & analytics, and regulatory/audit requirements
  • Define and enforce data governance standards, ensuring compliance with insurance and financial data regulations and consistent definitions of business metrics across stakeholders
  • Build and operate data pipelines on a cloud-native platform, leveraging distributed processing frameworks (Spark / PySpark), data lakes, lakehouses, and warehouses
  • Implement and manage orchestration, monitoring, alerting, and cost-optimization mechanisms across the data platform
  • Contribute to long-term data strategy, platform architecture decisions, and cost-optimization initiatives while maintaining strict security and compliance standards

Required Technical Skills

  • Core Stack: Python, Advanced SQL (complex joins, window functions, performance tuning), PySpark
  • Platforms: Azure, AWS, Databricks, Snowflake
  • ETL / Orchestration: Airflow or similar frameworks
  • Data Modeling: Star/Snowflake schema, dimensional modeling, OLAP/OLTP
  • Visualization Exposure: Power BI
  • Version Control & CI/CD: GitHub, Azure DevOps, or equivalent
  • Integrations: APIs, real-time data streaming, ML model integration exposure

Preferred Qualifications

  • Bachelor’s or Master’s degree in Computer Science, Engineering, or related field
  • 5+ years of experience in data engineering or similar roles
  • Strong ability to align technical solutions with business objectives
  • Excellent communication and stakeholder management skills

What We Offer

  • Direct collaboration with the core US data leadership team
  • High ownership and trust to manage the function end-to-end
  • Exposure to a global environment with advanced tools and best practices
Remote only
2 - 7 yrs
₹5L - ₹15L / yr
DevOps
CI/CD
Docker
Kubernetes
Amazon Web Services (AWS)
+8 more

BluePMS Software Solutions Pvt Ltd is hiring a talented DevOps Engineer to join our growing engineering team. In this role, you will be responsible for building and maintaining scalable infrastructure, automating deployment processes, and improving the reliability of our software delivery pipelines.


Key Responsibilities:

 1: Design, build, and maintain CI/CD pipelines for faster and reliable deployments.

 2: Manage and monitor cloud infrastructure and servers.

 3: Automate build, testing, and deployment processes.

 4: Collaborate with development and QA teams to improve release cycles.

 5: Monitor system performance and ensure high availability and reliability.

 6: Troubleshoot infrastructure and deployment issues.

 7: Implement security best practices in DevOps workflows.


Required Skills:

 1: Strong understanding of DevOps principles and CI/CD pipelines.

 2: Experience with Docker, Kubernetes, or containerization technologies.

 3: Familiarity with cloud platforms such as AWS, Azure, or GCP.

 4: Experience with Git, Jenkins, GitHub Actions, or similar tools.

 5: Basic scripting knowledge (Bash, Python, or Shell).

 6: Good understanding of Linux systems and networking concepts.


Eligibility:

 1: Experience: 2 – 7 years

 2: Qualification: Bachelor's degree in Computer Science, IT, or related field

 3: Strong analytical and problem-solving skills.


Location: Chennai / Remote


Apply here: https://connectsblue.com/jobs/753/devops-engineer-at-bluepms-software-solutions-pvt-ltd

Neuvamacro Technology Pvt Ltd
Remote only
5 - 15 yrs
₹12L - ₹15L / yr
Tableau
Snowflake schema
SQL
ETL
Data modeling
+4 more

Job Description:

Position Type: Full-Time Contract (with potential to convert to Permanent)

Location: Remote (Australian Time Zone)

Availability: Immediate Joiners Preferred

About the Role

We are seeking an experienced Tableau and Snowflake Specialist with 5+ years of hands‑on expertise to join our team as a full‑time contractor for the next few months. Based on performance and business requirements, this role has a strong potential to transition into a permanent position.

The ideal candidate is highly proficient in designing scalable dashboards, managing Snowflake data warehousing environments, and collaborating with cross-functional teams to drive data‑driven insights.

Key Responsibilities

  • Develop, design, and optimize advanced Tableau dashboards, reports, and visual analytics.
  • Build, maintain, and optimize datasets and data models in Snowflake Cloud Data Warehouse.
  • Collaborate with business stakeholders to gather requirements and translate them into analytics solutions.
  • Write efficient SQL queries, stored procedures, and data pipelines to support reporting needs.
  • Perform data profiling, data validation, and ensure data quality across systems.
  • Work closely with data engineering teams to improve data structures for better reporting efficiency.
  • Troubleshoot performance issues and implement best practices for both Snowflake and Tableau.
  • Support deployment, version control, and documentation of BI solutions.
  • Ensure availability of dashboards during Australian business hours.

Required Skills & Experience

  • 5+ years of strong hands-on experience with Tableau development (Dashboards, Storyboards, Calculated Fields, LOD Expressions).
  • 5+ years of experience working with Snowflake including schema design, warehouse configuration, and query optimization.
  • Advanced knowledge of SQL and performance tuning.
  • Strong understanding of data modeling, ETL processes, and cloud data platforms.
  • Experience working in fast-paced environments with tight delivery timelines.
  • Excellent communication and stakeholder management skills.
  • Ability to work independently and deliver high‑quality outputs aligned with business objectives.

Nice-to-Have Skills

  • Knowledge of Python or any ETL tool.
  • Experience with Snowflake integrations (Fivetran, DBT, Azure/AWS/GCP).
  • Tableau Server/Prep experience.

Contract Details

  • Full-Time Contract for several months.
  • High possibility of conversion to permanent, based on performance.
  • Must be available to work in the Australian time zone.
  • Immediate joiners are highly encouraged.


NeoGenCode Technologies Pvt Ltd
Posted by Ritika Verma
Bengaluru (Bangalore)
3 - 6 yrs
₹15L - ₹25L / yr
Python
Go (Golang)
Java
Amazon Web Services (AWS)



We’re Hiring Backend Developers | Java / Go / Python | 3–5 Years | Bangalore

We are expanding our engineering team and looking for talented Backend Developers with 3–5 years of experience to join us in Bangalore.

If you enjoy building scalable systems, working with modern cloud technologies, and solving complex problems, this opportunity is for you!


💼 Position

Backend Developer (Java / Go / Python)

📍 Location: Bangalore

👨‍💻 Experience: 3–5 Years

🔎 What You Bring

✔ Strong proficiency in Go, or similar backend stacks such as Python with FastAPI or Java with Spring Boot.

✔ Experience designing RESTful APIs

✔ Hands-on experience with AWS / GCP

✔ Experience working with PostgreSQL, Redis, Kafka, or SQS

✔ Strong experience with Microservices architecture

✔ Hands-on experience with CI/CD pipelines

✔ Experience with containerized environments (Docker / Kubernetes)

✔ Familiarity with monitoring tools like Prometheus, Grafana, and Spring Actuator

✔ Strong understanding of data structures, algorithms, and system design fundamentals

✔ Ability to own features end-to-end and solve complex engineering problems

✔ Strong focus on code quality, observability, and operational ownership

✔ Comfortable working in fast-paced, high-growth environments





TVARIT GmbH

Posted by DrSoumya Sahadevan
Pune
5 - 15 yrs
₹20L - ₹38L / yr
React.js
API
AWS CloudFormation
Django
Node.js
+7 more

Availability: Full time 

Location: Pune, India 

Experience: 5–6 years

 

Tvarit Solutions Private Limited is a wholly owned subsidiary of TVARIT GmbH, Germany. TVARIT provides software to reduce manufacturing waste such as scrap, energy, and machine downtime using its patented technology. With its software products and a highly competent team from renowned universities, TVARIT has gained customer trust across 4 continents within a short span of 3 years. TVARIT was ranked among the top 8 of 490 AI companies by the European Data Incubator, and has received many more awards from the German government and industrial organizations, making it one of the most innovative AI companies in Germany and Europe.

 

We are looking for a passionate Full Stack Developer Level 2 to join our technology team in Pune Centre. You will be responsible for architecture, design, development, and testing; leading the software development team; and building the infrastructure that will support the company’s solutions. You will get an opportunity to work closely on projects involving the automation of the manufacturing process.

 

Key Responsibilities 

· Full Stack Development: Design, develop, and maintain scalable web applications using React with TypeScript for the frontend and Node.js/Python for the backend.

· AI Integration: Collaborate with data scientists and ML engineers to integrate AI/ML models into the SaaS platform, ensuring seamless performance and usability.

· API Development & Optimization: Build and optimize high-performance REST APIs in Node.js and Python (Django, Flask, or FastAPI) to support real-time data processing and analytics.

· Database Engineering: Design, manage, and optimize data storage using relational (PostgreSQL), NoSQL (MongoDB/DynamoDB), graph, and vector databases for handling complex industrial data.

· Cloud-Native Deployment: Deploy, monitor, and manage services in containerized environments using Docker and Kubernetes on Linux-based systems (Ubuntu/Debian).

· System Architecture & Design: Contribute to architectural decisions, leveraging OOPs, microservices, domain-driven design, and design patterns to ensure scalability, security, and maintainability.

· Data Handling & Processing: Work with large-scale manufacturing datasets using Python (pandas) to enable predictive analytics and AI-driven insights.

· Collaboration & Agile Delivery: Partner with cross-functional teams—including product managers, manufacturing domain experts, and AI researchers—to translate business needs into technical solutions.

· Performance & Security: Ensure robust, secure, and high-performance software by implementing best practices in algorithms, data structures, and system design.

· Continuous Improvement: Stay updated on emerging technologies in AI, SaaS, and manufacturing systems to propose innovative solutions that enhance product capability.

 

Must have worked with these technologies:

· 5+ years of production experience with React (TypeScript) and Node.js

· Python, pandas; high-performance REST APIs in Node.js and Python (Django, Flask, or FastAPI)

· Databases: relational DBs like PostgreSQL, NoSQL DBs like MongoDB or DynamoDB, vector databases, graph DBs

· OS: Linux flavors such as Ubuntu, Debian

· Source Control and CI/CD

· Software Fundamentals: excellent command of algorithms and data structures

· Software Design and Architecture: OOP, design patterns, microservices, monolithic architectures, domain-driven design

· Containers: Docker and Kubernetes

· Cloud: fundamentals of AWS such as S3 buckets, EC2, IAM, security groups


Benefits and Perks:

· Be part of the product which is transforming the manufacturing landscape with AI

· A culture of innovation, creativity, learning, and even failure; we believe in bringing out the best in you.

· Progressive leave policy for effective work-life balance.

· Get mentored by highly qualified internal resource groups and opportunities to avail industry-driven mentorship programs.

· Multicultural peer groups and supportive workplace policies. 

· Work from beaches, hills, mountains, and many more with the yearly workcation program; we believe in mixing elements of vacation and work.

 

 

 

What is it like to work at a startup?

Working for TVARIT (deep-tech German IT Startup) can offer you a unique blend of innovation, collaboration, and growth opportunities. But it's essential to approach it with a willingness to adapt and thrive in a dynamic environment.

 

If this position sparked your interest, do apply today!

Bengaluru (Bangalore)
1 - 2 yrs
₹5L - ₹6L / yr
Amazon Web Services (AWS)
DevOps

AWS DevOps Engineer (1–2 Years Experience)

We are looking for a motivated AWS DevOps Engineer with 1–2 years of experience to join our team. The ideal candidate should have hands-on experience with cloud infrastructure, CI/CD pipelines, and automation tools.

Key Responsibilities

  • Manage and maintain cloud infrastructure on Amazon Web Services
  • Build and manage CI/CD pipelines for automated deployments
  • Work with containerization tools like Docker
  • Assist in deployment and orchestration using Kubernetes
  • Monitor applications and infrastructure performance
  • Collaborate with development teams to improve deployment processes
  • Automate infrastructure using scripts and DevOps tools

Required Skills

  • 1–2 years of experience in DevOps or Cloud Engineering
  • Strong knowledge of Amazon Web Services services such as EC2, S3, IAM
  • Experience with CI/CD tools like Jenkins or GitHub Actions
  • Knowledge of container tools like Docker
  • Familiarity with version control systems like Git
  • Basic scripting knowledge (Shell / Python)

Good to Have

  • Experience with Infrastructure as Code tools like Terraform
  • Knowledge of monitoring tools such as Prometheus or Grafana
  • Understanding of Linux environments


Wissen Technology

Posted by Monika Sekaran
Pune
7 - 11 yrs
Best in industry
Java
Spring Boot
Amazon Web Services (AWS)
Microservices
Design patterns
+2 more

Key Responsibilities:

  • Design, develop, and maintain scalable backend applications using Java and Spring Boot.
  • Build and consume RESTful APIs and ensure secure, reliable API integrations.
  • Develop microservices-based architecture and deploy applications in cloud environments.
  • Work with cloud platforms such as AWS/Azure/GCP for application deployment and management.
  • Write clean, maintainable, and efficient code following best practices.
  • Implement CI/CD pipelines and support DevOps practices.
  • Optimize applications for performance, scalability, and reliability.
  • Collaborate with cross-functional teams including frontend, QA, DevOps, and product teams.
  • Participate in code reviews, technical design discussions, and architectural decisions.
  • Troubleshoot production issues and provide timely resolution.

Required Skills & Qualifications:

  • 5–10 years of hands-on experience in Java (Java 8 or above).
  • Strong experience with Spring Boot, Spring MVC, Spring Data, Spring Security.
  • Solid understanding of RESTful API design & development.
  • Experience in microservices architecture.
  • Hands-on experience with at least one cloud platform (AWS / Azure / GCP).
  • Knowledge of containerization tools like Docker and orchestration tools like Kubernetes.
  • Experience with relational and/or NoSQL databases (MySQL, PostgreSQL, MongoDB).
  • Familiarity with CI/CD tools (Jenkins, GitHub Actions, etc.).
  • Strong understanding of Git and version control practices.
  • Good understanding of design patterns and object-oriented programming principles.

