50+ SQL Jobs in India
Apply to 50+ SQL Jobs on CutShort.io. Find your next job, effortlessly. Browse SQL Jobs and apply today!
Like us, you'll be deeply committed to delivering impactful outcomes for customers.
- 7+ years of demonstrated ability to develop resilient, high-performance, and scalable code tailored to application usage demands.
- Ability to lead by example with hands-on development while managing project timelines and deliverables. Experience in agile methodologies and practices, including sprint planning and execution, to drive team performance and project success.
- Deep expertise in Node.js, with experience in building and maintaining complex, production-grade RESTful APIs and backend services.
- Experience writing batch/cron jobs using Python and Shell scripting.
- Experience in web application development using JavaScript and JavaScript libraries.
- Have a basic understanding of TypeScript, JavaScript, HTML, CSS, JSON, and REST-based applications.
- Experience/Familiarity with RDBMS and NoSQL Database technologies like MySQL, MongoDB, Redis, ElasticSearch and other similar databases.
- Understanding of code versioning tools such as Git.
- Understanding of building applications deployed on the cloud using Google Cloud Platform (GCP) or Amazon Web Services (AWS)
- Experience with JS-based build/package tools like Grunt, Gulp, Bower, and Webpack.
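For illustration only (not part of the posting), the "batch/cron jobs using Python" requirement above usually means small, schedulable scripts along the following lines; the file names and filtering rule here are hypothetical.

    #!/usr/bin/env python3
    # Hypothetical nightly batch job: copy active records into a daily extract.
    import csv
    import logging
    from datetime import date

    logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")

    def run(input_path: str, output_path: str) -> int:
        """Write active rows from input_path to output_path; return rows written."""
        written = 0
        with open(input_path, newline="") as src, open(output_path, "w", newline="") as dst:
            reader = csv.DictReader(src)
            writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
            writer.writeheader()
            for row in reader:
                if row.get("status") == "active":
                    writer.writerow(row)
                    written += 1
        logging.info("wrote %d rows on %s", written, date.today())
        return written

    if __name__ == "__main__":
        run("users.csv", f"active_users_{date.today()}.csv")

A script like this is typically scheduled with a crontab entry such as 0 2 * * * /usr/bin/python3 daily_extract.py.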
We are looking for a Python Backend Developer to design, build, and maintain scalable backend services and APIs. The role involves working with modern Python frameworks, databases (SQL and NoSQL), and building well-tested, production-grade systems.
You will collaborate closely with frontend developers, AI/ML engineers, and system architects to deliver reliable and high-performance backend solutions.
Key Responsibilities
- Design, develop, and maintain backend services using Python
- Build and maintain RESTful APIs using FastAPI
- Design efficient data models and queries using MongoDB and SQL databases (PostgreSQL/MySQL)
- Ensure high performance, security, and scalability of backend systems
- Write unit tests, integration tests, and API tests to ensure code reliability
- Debug, troubleshoot, and resolve production issues
- Follow clean code practices, documentation, and version control workflows
- Participate in code reviews and contribute to technical discussions
- Work closely with cross-functional teams to translate requirements into technical solutions
Required Skills & Qualifications
Technical Skills
- Strong proficiency in Python
- Hands-on experience with FastAPI
- Experience with MongoDB (schema design, indexing, aggregation)
- Solid understanding of SQL databases and relational data modelling
- Experience writing and maintaining automated tests, including unit testing (e.g., pytest) and API testing (a minimal illustrative sketch follows this list)
- Understanding of REST API design principles
- Familiarity with Git and collaborative development workflows
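A minimal, purely illustrative sketch of the FastAPI and pytest skills listed above; the Item model and route are hypothetical, and the in-memory dict stands in for MongoDB/SQL.

    # test_items.py - a tiny FastAPI endpoint plus pytest-based unit/API tests.
    from fastapi import FastAPI, HTTPException
    from fastapi.testclient import TestClient
    from pydantic import BaseModel

    app = FastAPI()

    class Item(BaseModel):
        id: int
        name: str

    _DB = {1: Item(id=1, name="widget")}  # in-memory stand-in for a real database

    @app.get("/items/{item_id}", response_model=Item)
    def read_item(item_id: int) -> Item:
        if item_id not in _DB:
            raise HTTPException(status_code=404, detail="not found")
        return _DB[item_id]

    client = TestClient(app)

    def test_read_item_ok():
        resp = client.get("/items/1")
        assert resp.status_code == 200
        assert resp.json()["name"] == "widget"

    def test_read_item_missing():
        assert client.get("/items/99").status_code == 404

Running pytest test_items.py exercises both the happy path and the 404 case.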
Good to Have
- Experience with async programming in Python (async/await)
- Knowledge of ORMs/ODMs (SQLAlchemy, Tortoise, Motor, etc.)
- Basic understanding of authentication & authorisation (JWT, OAuth)
- Exposure to Docker / containerised environments
- Experience working in Agile/Scrum teams
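As a hedged illustration of the async/await and ODM items above (the connection string, database, and collection names are assumptions, not details from the posting):

    # Async MongoDB lookup using Motor, an async driver/ODM for MongoDB.
    import asyncio
    from motor.motor_asyncio import AsyncIOMotorClient

    async def find_user(email: str):
        client = AsyncIOMotorClient("mongodb://localhost:27017")
        db = client["app_db"]
        # The await keeps the event loop free to serve other requests meanwhile.
        return await db["users"].find_one({"email": email})

    if __name__ == "__main__":
        print(asyncio.run(find_user("user@example.com")))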
What We Value
- Strong problem-solving and debugging skills
- Attention to detail and commitment to quality
- Ability to write testable, maintainable, and well-documented code
- Ownership mindset and willingness to learn
- Teamwork
What We Offer
- Opportunity to work on real-world, production systems
- Technically challenging problems and ownership of components
- Collaborative engineering culture
Review Criteria
- Strong Data Scientist / Machine Learning / AI Engineer profile
- 2+ years of hands-on experience as a Data Scientist or Machine Learning Engineer building ML models
- Strong expertise in Python with the ability to implement classical ML algorithms including linear regression, logistic regression, decision trees, gradient boosting, etc.
- Hands-on experience in at least two use cases among recommendation systems, image data, fraud/risk detection, price modelling, and propensity models
- Strong exposure to NLP, including text generation or text classification, embeddings, similarity models, user profiling, and feature extraction from unstructured text
- Experience productionizing ML models through APIs/CI/CD/Docker and working in AWS or GCP environments
- Preferred (Company) – Must be from product companies
Job Specific Criteria
- CV Attachment is mandatory
- What's your current company?
- Which use cases do you have hands-on experience with?
- Are you okay with the Mumbai location (if you are from outside Mumbai)?
- Reason for change (if candidate has been in current company for less than 1 year)?
- Reason for hike (if greater than 25%)?
Role & Responsibilities
- Partner with Product to spot high-leverage ML opportunities tied to business metrics.
- Wrangle large structured and unstructured datasets; build reliable features and data contracts.
- Build and ship models to:
- Enhance customer experiences and personalization
- Boost revenue via pricing/discount optimization
- Power user-to-user discovery and ranking (matchmaking at scale)
- Detect and block fraud/risk in real time
- Score conversion/churn/acceptance propensity for targeted actions
- Collaborate with Engineering to productionize via APIs/CI/CD/Docker on AWS.
- Design and run A/B tests with guardrails.
- Build monitoring for model/data drift and business KPIs
Ideal Candidate
- 2–5 years of DS/ML experience in consumer internet / B2C products, with 7–8 models shipped to production end-to-end.
- Proven, hands-on success in at least two (preferably 3–4) of the following:
- Recommender systems (retrieval + ranking, NDCG/Recall, online lift; bandits a plus)
- Fraud/risk detection (severe class imbalance, PR-AUC)
- Pricing models (elasticity, demand curves, margin vs. win-rate trade-offs, guardrails/simulation)
- Propensity models (payment/churn)
- Programming: strong Python and SQL; solid git, Docker, CI/CD.
- Cloud and data: experience with AWS or GCP; familiarity with warehouses/dashboards (Redshift/BigQuery, Looker/Tableau).
- ML breadth: recommender systems, NLP or user profiling, anomaly detection.
- Communication: clear storytelling with data; can align stakeholders and drive decisions.
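To make the fraud/risk detection bullet above concrete, here is a small, purely illustrative sketch of an imbalanced classifier evaluated with PR-AUC; the data is synthetic.

    # Imbalanced binary classification evaluated with PR-AUC (average precision).
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.metrics import average_precision_score
    from sklearn.model_selection import train_test_split

    # ~2% positive class, mimicking the severe imbalance typical of fraud data.
    X, y = make_classification(n_samples=20000, n_features=20, weights=[0.98, 0.02], random_state=42)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, test_size=0.25, random_state=42)

    model = GradientBoostingClassifier().fit(X_tr, y_tr)
    scores = model.predict_proba(X_te)[:, 1]

    print(f"PR-AUC: {average_precision_score(y_te, scores):.3f}")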
Review Criteria:
- Strong Software Engineer fullstack profile using NodeJS / Python and React
- 6+ years of experience in software development using Python or NodeJS (for backend) & React (for frontend)
- Must have strong experience working with TypeScript
- Must have experience with message-based systems like Kafka, RabbitMQ, Redis
- Databases - PostgreSQL & NoSQL databases like MongoDB
- Product Companies Only
- Tier 1 Engineering Institutes preferred (IIT, NIT, BITS, IIIT, DTU or equivalent)
Preferred:
- Experience in Fin-Tech, Payment, POS and Retail products is highly preferred
- Experience in mentoring, coaching the team.
Role & Responsibilities:
We are currently seeking a Senior Engineer to join our Financial Services team, contributing to the design and development of scalable systems.
The Ideal Candidate Will Be Able To-
- Take ownership of delivering performant, scalable, and high-quality cloud-based software on both the frontend and backend.
- Mentor team members to develop in line with product requirements.
- Collaborate with Senior Architect for design and technology choices for product development roadmap.
- Do code reviews.
Ideal Candidate:
- Thorough knowledge of developing cloud-based software, including backend APIs and React-based frontends.
- Thorough knowledge of scalable design patterns and message-based systems such as Kafka, RabbitMQ, and Redis, along with MongoDB, ORMs, and SQL.
- Experience with AWS services such as S3, IAM, Lambda etc.
- Expert-level coding skills in Python (FastAPI/Django), NodeJS, TypeScript, and ReactJS.
- Eye for user responsive designs on the frontend.
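A minimal sketch (assumed names, not from the posting) of the message-based pattern referenced above, using Redis pub/sub in Python; Kafka or RabbitMQ would follow the same produce/consume shape.

    # Minimal producer/consumer over Redis pub/sub using redis-py.
    import json
    import redis

    r = redis.Redis(host="localhost", port=6379, decode_responses=True)

    def publish_payment_event(order_id: str, amount: float) -> None:
        # Producers emit events; downstream services react asynchronously.
        r.publish("payments", json.dumps({"order_id": order_id, "amount": amount}))

    def consume_payment_events() -> None:
        sub = r.pubsub()
        sub.subscribe("payments")
        for message in sub.listen():
            if message["type"] == "message":
                print("received", json.loads(message["data"]))

    if __name__ == "__main__":
        publish_payment_event("ord-123", 499.0)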
We are seeking an experienced and highly skilled Java (Fullstack) Engineer to join our team.
The ideal candidate will have a strong background in back-end Java (Spring Boot, Spring Framework) and front-end JavaScript (React or Angular), with the ability to build scalable, high-performance applications.
Responsibilities
- Develop, test, and deploy scalable and robust back-end services using Java and Spring Boot.
- Build responsive and user-friendly front-end applications using modern JavaScript frameworks such as React or Angular.
- Collaborate with architects and team members to design scalable, maintainable, and efficient systems.
- Contribute to architectural decisions for microservices, APIs, and cloud solutions.
- Implement and maintain RESTful APIs for seamless integration.
- Write clean, efficient, and reusable code adhering to best practices.
- Conduct code reviews, performance optimization, and debugging.
- Work with cross-functional teams, including UX/UI designers, product managers, and the QA team.
- Mentor junior developers & provide technical guidance.
Skills & Requirements
- Minimum 3 years of experience in backend/full-stack development
- Back-end - Core Java/Java 8, Spring Boot, Spring Framework, microservices, REST APIs, Kafka
- Front-end - JavaScript, HTML, CSS, TypeScript, Angular
- Database - MySQL
Preferred
- Experience with batch writing, application performance, caching, and web security
- Experience working in fintech, payments, or high-scale production environments
We are seeking an experienced and highly skilled Java Lead to join our team. The ideal candidate will have a strong background in both front-end and back-end technologies, with expertise in Java and Spring.
As a Lead, you will be responsible for overseeing the development team, architecting scalable applications, and ensuring best practices in software development. This role requires a hands-on leader with excellent problem-solving abilities and a passion for mentoring junior team members.
Responsibilities
- Lead and mentor a team of developers, providing guidance on coding standards, architecture, and best practices.
- Architect, design, and develop end-to-end Java-based web applications, ensuring high performance, security, and scalability.
- Work closely with cross-functional teams, including product managers, designers, and other developers, to ensure alignment on project requirements and deliverables.
- Conduct code reviews and provide constructive feedback to team members to improve code quality and maintain a consistent codebase.
- Participate in Agile/Scrum ceremonies such as stand-ups, sprint planning, and retrospectives to contribute to the development process.
- Troubleshoot and resolve complex technical issues and ensure timely resolution of bugs and improvements.
- Stay up to date with emerging technologies and industry trends, recommending and implementing improvements to keep our stack modern and effective.
Skills & Requirements
- Minimum 8 years of experience in Java development, with at least 2 years in a lead developer role.
- Back-end - Core Java/Java 8, Spring Boot, Spring Framework, microservices, REST APIs, Kafka
- Database - MySQL
- Must be working in the fintech/payments domain
Preferred
- Experience with batch writing, application performance, caching, and web security
The Opportunity:
As a Technical Support Consultant, you will play a significant role in Performio providing world-class support to our customers. With our tried and tested onboarding process, you will soon become familiar with the Performio product and company.
You will draw on previous support experience to monitor for new support requests in Zendesk and provide initial triage with 1st and 2nd level support, ensuring the customer is kept up to date and the request is completed in a timely manner.
You will collaborate with other teams to ensure more complex requests are managed efficiently, and will provide feedback to help improve product and solution knowledge as well as processes.
Answers to questions asked by customers that are not in the knowledge base will be reviewed and added to the knowledge base if appropriate. We’re looking for someone who thinks ahead, recognising opportunities to help customers help themselves.
You will help out with configuration changes and testing, furthering your knowledge and experience of Performio. You may also be expected to help out with Managed Service, Implementation, and Work Order related tasks from time to time.
About Performio:
Performio is the last ICM software you’ll ever need. It allows you to manage incentive compensation complexity and change over the long run by combining a structured plan builder and flexible data management, with a partner who will make you a customer for life.
Our people are highly motivated and engaged professionals with a clear set of values and behaviors. We prove these values matter to us by living them each day. This makes Performio both a great place to work and a great company to do business with.
But a great team alone is not sufficient to win. We have solved the fundamental issue widespread in our industry: overly rigid applications that cannot adapt to your needs, or overly flexible ones that become impossible to maintain over time. Only Performio allows you to manage incentive compensation complexity and change over the long run by combining a structured plan builder and flexible data management. The component-based plan builder makes it easier to understand, change, and self-manage than traditional formula or rules-based solutions. Our ability to import data from any source, in any format, and perform in-app data transformations eliminates the pain of external processing and provides end-to-end data visibility. The combination of these two functions allows us to deliver more powerful reporting and insights. And while every vendor says they are a partner, we truly are one. We not only get your implementation right the first time, we enable you and give you the autonomy and control to make changes year after year. And unlike most, we support every part of your unique configuration. Performio is a partner that will make you a customer for life.
We have a global customer base across Australia, Asia, Europe, and the US in 25+ industries that includes many well-known companies like Toll Brothers, Abbott Labs, News Corp, Johnson & Johnson, Nikon, and Uber Freight.
What will you be doing:
● Monitoring and triaging new Support requests submitted by customers using our Zendesk Support Portal
● Providing 1st and 2nd line support for Support requests
● Investigating, reproducing, and resolving customer issues within the required Service Level Agreements
● Maintaining our evolving knowledge base
● Documenting root causes and resolutions clearly and concisely
● Assisting with the implementation and testing of Change Requests and implementation projects
● Making recommendations for solutions based on clients’ requests as your knowledge of the product grows
● Assisting in educating our clients’ compensation administrators on applying best practices
What we’re looking for:
● Passion for customer service, with a communication style that can be adapted to suit the audience
● A problem solver with a range of troubleshooting methodologies
● Experience in the Sales Compensation industry
● Familiarity with basic database concepts and spreadsheets, and experience working with large datasets (Excel, relational database tables, SQL, ETL, or other tools/languages)
● 4+ years of experience in a similar role (experience with ICM software preferred)
● Experience with implementation and support of ICM solutions like SAP Commissions, Varicent, or Xactly will be a big plus
● Positive attitude - optimistic, cares deeply about the company and customers
● High emotional IQ - shows empathy, listens when appropriate, creates a healthy conversation dynamic
● Resourceful - has an "I'll figure it out" attitude if something they need doesn't exist
Role Overview:
We are looking for a detail-oriented Quality Assurance (QA) Tester who is passionate about delivering high-quality consumer-facing applications. This role involves manual testing with exposure to automation, API testing, databases, and mobile/web platforms, while working closely with engineering and product teams across the SDLC.
Products:
• Openly – A conversation-first social app focused on meaningful interactions.
• Playroom – Voicechat – A real-time voice chat platform for live community engagement.
• FriendChat – A chatroom-based social app for discovering and connecting with new people.
Key Responsibilities:
• Perform manual testing for Android, web, and native applications.
• Create and execute detailed test scenarios, test cases, and test plans.
• Conduct REST API testing using Postman.
• Validate data using SQL and MongoDB.
• Identify, report, and track defects with clear reproduction steps.
• Support basic automation testing using Selenium (Java) and Appium.
• Perform regression, smoke, sanity, and exploratory testing.
• Conduct risk analysis and highlight quality risks early in the SDLC.
• Collaborate closely with developers and product teams for defect resolution.
• Participate in CI/CD pipelines and support automated test executions.
• Use ADB tools for Android testing across devices and environments.
Required Skills & Technical Expertise:
• Strong knowledge of Manual Testing fundamentals.
• Hands-on experience with Postman and REST APIs.
• Working knowledge of SQL and MongoDB.
• Ability to design effective test scenarios.
• Basic understanding of Automation Testing concepts.
• Familiarity with SDLC and QA methodologies.
• Exposure to Selenium with Java and Appium.
• Understanding of Android, web, and native application testing.
• Experience using proxy tools for debugging and network inspection.
Good to Have:
• Exposure to CI/CD tools and pipelines.
• Hands-on experience with Appium, K6, Kafka, and proxy tools.
• Basic understanding of performance and load testing.
• Awareness of risk-based testing strategies.
Key Traits:
• High attention to detail and quality.
• Strong analytical and problem-solving skills.
• Clear communication and collaboration abilities.
• Eagerness to learn and grow in automation and advanced testing tools.
We are looking for a skilled Data Engineer / Data Warehouse Engineer to design, develop, and maintain scalable data pipelines and enterprise data warehouse solutions. The role involves close collaboration with business stakeholders and BI teams to deliver high-quality data for analytics and reporting.
Key Responsibilities
- Collaborate with business users and stakeholders to understand business processes and data requirements
- Design and implement dimensional data models, including fact and dimension tables
- Identify, design, and implement data transformation and cleansing logic
- Build and maintain scalable, reliable, and high-performance ETL/ELT pipelines
- Extract, transform, and load data from multiple source systems into the Enterprise Data Warehouse
- Develop conceptual, logical, and physical data models, including metadata, data lineage, and technical definitions
- Design, develop, and maintain ETL workflows and mappings using appropriate data load techniques
- Provide high-level design, research, and effort estimates for data integration initiatives
- Provide production support for ETL processes to ensure data availability and SLA adherence
- Analyze and resolve data pipeline and performance issues
- Partner with BI teams to design and develop reports and dashboards while ensuring data integrity and quality
- Translate business requirements into well-defined technical data specifications
- Work with data from ERP, CRM, HRIS, and other transactional systems for analytics and reporting
- Define and document BI usage through use cases, prototypes, testing, and deployment
- Support and enhance data governance and data quality processes
- Identify trends, patterns, anomalies, and data quality issues, and recommend improvements
- Train and support business users, IT analysts, and developers
- Lead and collaborate with teams spread across multiple locations
Required Skills & Qualifications
- Bachelor’s degree in Computer Science or a related field, or equivalent work experience
- 3+ years of experience in Data Warehousing, Data Engineering, or Data Integration
- Strong expertise in data warehousing concepts, tools, and best practices
- Excellent SQL skills
- Strong knowledge of relational databases such as SQL Server, PostgreSQL, and MySQL
- Hands-on experience with Google Cloud Platform (GCP) services, including:
- BigQuery
- Cloud SQL
- Cloud Composer (Airflow)
- Dataflow
- Dataproc
- Cloud Functions
- Google Cloud Storage (GCS)
- Experience with Informatica PowerExchange for Mainframe, Salesforce, and modern data sources
- Strong experience integrating data using APIs, XML, JSON, and similar formats
- In-depth understanding of OLAP, ETL frameworks, Data Warehousing, and Data Lakes
- Solid understanding of SDLC, Agile, and Scrum methodologies
- Strong problem-solving, multitasking, and organizational skills
- Experience handling large-scale datasets and database design
- Strong verbal and written communication skills
- Experience leading teams across multiple locations
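As a small, hypothetical illustration of the BigQuery skills listed above (the project, dataset, and table names are placeholders):

    # Run an aggregate query against a (hypothetical) warehouse table in BigQuery.
    from google.cloud import bigquery

    client = bigquery.Client()  # uses Application Default Credentials

    sql = """
        SELECT order_date, SUM(amount) AS revenue
        FROM `my_project.sales_dw.fact_orders`
        GROUP BY order_date
        ORDER BY order_date DESC
        LIMIT 7
    """

    for row in client.query(sql).result():
        print(row["order_date"], row["revenue"])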
Good to Have
- Experience with SSRS and SSIS
- Exposure to AWS and/or Azure cloud platforms
- Experience working with enterprise BI and analytics tools
Why Join Us
- Opportunity to work on large-scale, enterprise data platforms
- Exposure to modern cloud-native data engineering technologies
- Collaborative environment with strong stakeholder interaction
- Career growth and leadership opportunities
About the Company
SimplyFI Softech India Pvt. Ltd. is a product-led company working across AI, Blockchain, and Cloud. The team builds intelligent platforms for fintech, SaaS, and enterprise use cases, focused on solving real business problems with production-grade systems.
Role Overview
This role is for someone who enjoys working hands-on with data and machine learning models. You’ll support real-world AI use cases end to end, from data prep to model integration, while learning how AI systems are built and deployed in production.
Key Responsibilities
- Design, develop, and deploy machine learning models with guidance from senior engineers
- Work with structured and unstructured datasets for cleaning, preprocessing, and feature engineering
- Implement ML algorithms using Python and standard ML libraries
- Train, test, and evaluate models and track performance metrics
- Assist in integrating AI/ML models into applications and APIs
- Perform basic data analysis and visualization to extract insights
- Participate in code reviews, documentation, and team discussions
- Stay updated on ML, AI, and Generative AI trends
Required Skills & Qualifications
- Bachelor’s degree in Computer Science, AI, Data Science, or a related field
- Strong foundation in Python
- Clear understanding of core ML concepts: supervised and unsupervised learning
- Hands-on exposure to NumPy, Pandas, and Scikit-learn
- Basic familiarity with TensorFlow or PyTorch
- Understanding of data structures, algorithms, and statistics
- Good analytical thinking and problem-solving skills
- Comfortable working in a fast-moving product environment
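A compact, illustrative sketch of the NumPy/Pandas/scikit-learn workflow named above; the dataset and the churn label are synthetic.

    # Toy supervised-learning workflow: build features, split, train, evaluate.
    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    df = pd.DataFrame({"age": rng.integers(18, 70, 500), "spend": rng.normal(100, 30, 500)})
    df["churned"] = ((df["spend"] < 90) & (df["age"] > 40)).astype(int)  # synthetic label

    X_tr, X_te, y_tr, y_te = train_test_split(
        df[["age", "spend"]], df["churned"], test_size=0.2, random_state=0
    )

    model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X_tr, y_tr)
    print("accuracy:", accuracy_score(y_te, model.predict(X_te)))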
Good to Have
- Exposure to NLP, Computer Vision, or Generative AI
- Experience with Jupyter Notebook or Google Colab
- Basic knowledge of SQL or NoSQL databases
- Understanding of REST APIs and model deployment concepts
- Familiarity with Git/GitHub
- AI/ML internships or academic projects
About Upsurge Labs
We're building the infrastructure and products that will shape how human civilization operates in the coming decades. The specifics evolve—the ambition doesn't.
The Role
The way software gets built is undergoing a fundamental shift. AI can now write, test, debug, and ship production-grade systems across web, mobile, embedded, robotics, and infrastructure. The bottleneck is no longer typing code—it's knowing what to build, why, and how the pieces fit together.
We're hiring Systems Engineers: people who can navigate an entire development cycle—from problem definition to production deployment—by directing AI tools and reasoning from first principles. You won't specialize in one stack. You'll operate across all of them.
This role replaces traditional dev teams. You'll work largely autonomously, shipping complete systems that previously required 3-5 specialists.
What You'll Do
- Own entire products and systems end-to-end: architecture, implementation, deployment, iteration
- Work across domains as needed—backend services, frontend interfaces, mobile apps, data pipelines, DevOps, embedded software, robotic systems
- Use AI tools to write, review, test, and debug code at high velocity
- Identify when AI output is wrong, incomplete, or subtly broken—and know how to fix it or when to escalate
- Make architectural decisions: database selection, protocol choices, system boundaries, performance tradeoffs
- Collaborate directly with designers, domain experts, and leadership
- Ship. Constantly.
What You Bring
First-principles thinking
You understand how systems work at a foundational level. When something breaks, you reason backward from the error to potential causes. You know the difference between a network timeout, a malformed query, a race condition, and a misconfigured environment—even if you haven't memorized the fix.
Broad technical fluency
You don't need to be an expert in everything. But you need working knowledge across:
- How web systems work: HTTP, DNS, TLS, REST, WebSockets, authentication flows
- How databases work: relational vs document vs key-value, indexing, query structure, transactions
- How infrastructure works: containers, orchestration, CI/CD, cloud primitives, networking basics
- How frontend works: rendering, state management, browser APIs, responsive design
- How mobile works: native vs cross-platform tradeoffs, app lifecycle, permissions
- How embedded/robotics software works: real-time constraints, sensor integration, communication protocols
You should be able to read code in any mainstream language and understand what it's doing.
AI-native workflow
You've already built real things using AI tools. You know how to prompt effectively, how to structure problems so AI can help, how to validate AI output, and when to step in manually.
High agency
You don't wait for permission or detailed specs. You figure out what needs to happen and make it happen. Ambiguity doesn't paralyze you.
Proof of work
Show us what you've built. Live products, GitHub repos, side projects, internal tools—anything that demonstrates you can ship complete systems.
What We Don't Care About
- Degrees or formal credentials
- Years of experience in a specific language or framework
- Whether you came from a "traditional" engineering path
What You'll Get
- Direct line to the CEO
- Autonomy to own large problem spaces
- A front-row seat to how engineering work is evolving
- Colleagues who ship fast and think clearly

Full‑Stack Engineer (Python/Django & Next.js)
Location: Bangalore
Experience: 2–8 years of hands‑on full‑stack development
We’re looking for a passionate Full‑Stack Engineer to join our team and help build secure, scalable systems that power exceptional customer experiences.
Key Skills -
• Architect and develop secure, scalable applications
• Collaborate closely with product & design teams
• Manage CI/CD pipelines and deployments
• Mentor engineers and enforce coding best practices
What we’re looking for:
• Strong expertise in Python/Django & Next.js/React
• Hands‑on with PostgreSQL, Docker, AWS/GCP
• Experience leading engineering teams
• Excellent problem‑solving & communication skills
If you’re excited about building impactful products and driving engineering excellence, apply now!
The Opportunity
Planview is looking for a passionate Sr Data Scientist to join our team tasked with developing innovative tools for connected work. You are an experienced expert in supporting enterprise applications using Data Analytics, Machine Learning, and Generative AI.
You will use this experience to lead other data scientists and data engineers. You will also effectively engage with product teams to specify, validate, prototype, scale, and deploy features with a consistent customer experience across the Planview product suite.
Responsibilities (What you'll do)
- Enable Data Science features within Planview applications by working in a fast-paced start-up mindset.
- Collaborate closely with product management to enable Data Science features that deliver significant value to customers, ensuring that these features are optimized for operational efficiency.
- Manage every stage of the AI/ML development lifecycle, from initial concept through deployment in a production environment.
- Provide leadership to other Data Scientists by exemplifying exceptional quality in work, nurturing a culture of continuous learning, and offering daily guidance in their research endeavors.
- Effectively communicate ideas drawn from complex data with clarity and insight.
Qualifications (What you'll bring)
- Master’s in operations research, Statistics, Computer Science, Data Science, or related field.
- 8+ years of experience as a data scientist, data engineer, or ML engineer.
- Demonstrable history of bringing Data Science features to enterprise applications.
- Exceptional Python and SQL coding skills.
- Experience with Optimization, Machine Learning, Generative AI, NLP, Statistics, and Simulation.
- Experience with AWS Data and ML Technologies (Sagemaker, Glue, Athena, Redshift)
Preferred qualifications:
- Experience working with datasets in the domains of project management, software development, and resource planning.
- Experience with common libraries and frameworks in data science (Scikit Learn, TensorFlow, PyTorch).
- Experience with ML platform tools (AWS SageMaker).
- Skilled at working as part of a global, diverse workforce of high-performing individuals.
- AWS Certification is a plus
We are seeking an experienced and highly skilled Java (Fullstack) Engineer to join our team.
The ideal candidate will have a strong background in back-end Java (Spring Boot, Spring Framework) and front-end JavaScript (React or Angular), with the ability to build scalable, high-performance applications.
Responsibilities
- Develop, test, and deploy scalable and robust back-end services using Java and Spring Boot.
- Build responsive and user-friendly front-end applications using modern JavaScript frameworks such as React or Angular.
- Collaborate with architects and team members to design scalable, maintainable, and efficient systems.
- Contribute to architectural decisions for microservices, APIs, and cloud solutions.
- Implement and maintain RESTful APIs for seamless integration.
- Write clean, efficient, and reusable code adhering to best practices.
- Conduct code reviews, performance optimization, and debugging.
- Work with cross-functional teams, including UX/UI designers, product managers, and the QA team.
- Mentor junior developers & provide technical guidance.
Skills & Requirements
- Minimum 5 years of experience in backend/full-stack development
- Back-end - Core Java/Java 8, Spring Boot, Spring Framework, microservices, REST APIs, Kafka
- Front-end - JavaScript, HTML, CSS, TypeScript, Angular
- Database - MySQL
Preferred
- Experience with batch writing, application performance, caching, and web security
- Experience working in fintech, payments, or high-scale production environments
Full Stack Developer
Company: Jupsoft Technologies Pvt. Ltd.
Experience: 2 Years
Salary: ₹30,000 – ₹40,000 per month
Location: Noida
Job Type: Full-Time
Job description:-
Key Responsibilities
- Design, develop, and maintain scalable, production-ready web applications using Next.js (frontend) and Django + Django REST Framework (backend).
- Build, document, and integrate RESTful APIs to enable seamless communication between services.
- Work with SQL-based databases (e.g., MySQL/PostgreSQL/SQL Server) for schema design, optimization, indexing, and performance tuning.
- Implement multi-tenant database architecture to support scalable, secure multi-tenant applications.
- Ensure applications are secure, optimized, and user-friendly with proper implementation of SSR, authentication, authorization, and session management.
- Utilize Tailwind CSS & Shadcn UI for building modern, reusable UI components.
- Integrate and work with Docker for containerization and development workflows.
- Work with Gemini, OpenAPI specifications for API implementation and documentation.
- Build CI/CD pipelines using GitHub Actions and collaborate with DevOps for smooth deployments.
- Manage end-to-end product lifecycle — development, testing, deployment, monitoring, and optimization.
- Troubleshoot, debug, and optimize application performance and reliability.
- Maintain high-quality technical documentation, code readability, and system design clarity.
- Collaborate closely with UI/UX, QA, and product teams to ensure smooth delivery.
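For illustration only: one common Django REST Framework shape for the API and multi-tenant work described above. The Invoice model, the tenant field on the user, and the route prefix are assumptions; this sketch belongs inside an existing Django project.

    # Hypothetical DRF serializer + viewset with per-tenant scoping.
    from rest_framework import routers, serializers, viewsets
    from myapp.models import Invoice  # assumed model with tenant, number, amount fields

    class InvoiceSerializer(serializers.ModelSerializer):
        class Meta:
            model = Invoice
            fields = ["id", "number", "amount", "tenant"]

    class InvoiceViewSet(viewsets.ModelViewSet):
        serializer_class = InvoiceSerializer

        def get_queryset(self):
            # Multi-tenant scoping: each request only sees its own tenant's rows.
            return Invoice.objects.filter(tenant=self.request.user.tenant)

    router = routers.DefaultRouter()
    router.register(r"invoices", InvoiceViewSet, basename="invoice")
    # In urls.py: urlpatterns = [path("api/", include(router.urls))]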
Required Skills & Qualifications
- Strong hands-on experience with Next.js & React ecosystem.
- Strong backend experience using Django + Django REST Framework (DRF).
- Strong understanding of SQL database systems and query optimization.
- Solid experience working with Docker for production-grade apps.
- Strong proficiency with Tailwind CSS and UI component libraries, especially Shadcn UI.
- Experience with GitHub Actions for CI/CD implementation.
- Strong understanding of REST APIs, OpenAPI, authentication, session management, and state management.
- Experience developing multi-tenant systems.
- Experience making applications production-ready and deploying end-to-end.
- Proficiency with HTML5, CSS3, JavaScript (ES6+), TypeScript.
- Familiarity with version control (Git/GitHub).
- Strong problem-solving, debugging, and analytical skills.
- Ability to write clean, maintainable, scalable code following best practices.
Nice to Have
- Experience with cloud services (AWS / GCP / Azure).
- Experience with WebSockets for real-time communication.
- Basic understanding of DevOps pipelines and monitoring tools.
Additional Attributes
- Strong communication skills and ability to collaborate across teams.
- Passion for learning new technologies and delivering high-quality products.
- Ability to work independently and manage timelines in a fast-paced environment.
Benefits:
- Health insurance
- Paid sick time
- Provident Fund
Work Location: In person
Job Role: Profile Data Setup Analyst
Job Title: Data Analyst
Location: Vadodara | Department: Customer Service
Experience: 3 - 5 Years
About the Company
At our company, we’ve been transforming the window and door industry with intelligent software for over 40 years. Our solutions power manufacturers, dealers, and installers globally, enabling efficiency, accuracy, and growth. We are now looking for curious, data-driven professionals to join our mission of delivering world-class digital solutions to our customers.
Job Overview
As a Profile Data Setup Analyst, you will play a key role in configuring, analysing, and managing product data for our customers. You will work closely with internal teams and clients to ensure accurate, optimized, and timely data setup. This role is perfect for someone who enjoys problem-solving, working with data, and continuously learning.
Key Responsibilities
• Understand customer product configurations and translate them into structured data using Windowmaker Software.
• Set up and modify profile data including reinforcements, glazing, and accessories, aligned with customer-specific rules and industry practices.
• Analyse data, identify inconsistencies, and ensure high-quality output that supports accurate quoting and manufacturing.
• Collaborate with cross-functional teams (Sales, Software Development, Support) to deliver complete and tested data setups on time.
• Provide training, guidance, and documentation to internal teams and customers as needed.
• Continuously look for process improvements and contribute to knowledge-sharing across the team.
• Support escalated customer cases related to data accuracy or configuration issues.
• Ensure timely delivery of all assigned tasks while maintaining high standards of quality and attention to detail.
Required Qualifications
• 3–5 years of experience in a data-centric role.
• Bachelor’s degree in engineering (e.g., Computer Science) or a related technical field.
• Experience with product data structures and product lifecycle.
• Strong analytical skills with a keen eye for data accuracy and patterns.
• Ability to break down complex product information into structured data elements.
• Eagerness to learn industry domain knowledge and software capabilities.
• Hands-on experience with Excel, SQL, or other data tools.
• Ability to manage priorities and meet deadlines in a fast-paced environment.
• Excellent written and verbal communication skills.
• A collaborative, growth-oriented mindset.
Nice to Have
• Prior exposure to ERP/CPQ/Manufacturing systems is a plus.
• Knowledge of the window and door (fenestration) industry is an added advantage.
Why Join Us
• Be part of a global product company with a solid industry reputation.
• Work on impactful projects that directly influence customer success.
• Collaborate with a talented, friendly, and supportive team.
• Learn, grow, and make a difference in the digital transformation of the fenestration industry.
- Design and implement integration solutions using iPaaS tools.
- Collaborate with customers, product, engineering and business stakeholders to translate business requirements into robust and scalable integration processes.
- Develop and maintain SQL queries and scripts to facilitate data manipulation and integration.
- Utilize RESTful API design and consumption to ensure seamless data flow between various systems and applications.
- Lead the configuration, deployment, and ongoing management of integration projects.
- Troubleshoot and resolve technical issues related to integration solutions.
- Document integration processes and create user guides for internal and external users.
- Stay current with the latest developments in iPaaS technologies and best practices
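Purely as an illustration of the RESTful consumption mentioned above (the endpoint, auth scheme, and pagination fields are made up):

    # Pull all records from a paginated REST endpoint before loading them elsewhere.
    import requests

    def fetch_all(base_url: str, token: str) -> list:
        records, page = [], 1
        while True:
            resp = requests.get(
                f"{base_url}/contacts",
                headers={"Authorization": f"Bearer {token}"},
                params={"page": page, "per_page": 100},
                timeout=30,
            )
            resp.raise_for_status()
            batch = resp.json().get("data", [])
            if not batch:
                return records
            records.extend(batch)
            page += 1

    if __name__ == "__main__":
        print(len(fetch_all("https://api.example.com/v1", "TOKEN")))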
Qualifications:
- Bachelor’s degree in Computer Science, Information Technology, or a related field.
- Minimum of 3 years’ experience in an integration engineering role with hands-on experience in an iPaaS tool, preferably Boomi.
- Proficiency in SQL and experience with database management and data integration patterns.
- Strong understanding of integration patterns and solutions, API design, and cloud-based technologies.
- Good understanding of RESTful APIs and integration.
- Excellent problem-solving and analytical skills.
- Strong communication and interpersonal skills, with the ability to work effectively in a team environment.
- Experience with various integration protocols (REST, SOAP, FTP, etc.) and data formats (JSON, XML, etc.).
Preferred Skills:
- Boomi (or other iPaaS) certifications
- Experience with Intapp's Integration Builder is highly desirable but not mandatory.
- Experience with cloud services like MS Azure.
- Knowledge of additional programming languages (e.g., .NET, Java) is advantageous.
What we offer:
- Competitive salary and benefits package.
- Dynamic and innovative work environment.
- Opportunities for professional growth and advancement.
About Hudson Data
At Hudson Data, we view AI as both an art and a science. Our cross-functional teams — spanning business leaders, data scientists, and engineers — blend AI/ML and Big Data technologies to solve real-world business challenges. We harness predictive analytics to uncover new revenue opportunities, optimize operational efficiency, and enable data-driven transformation for our clients.
Beyond traditional AI/ML consulting, we actively collaborate with academic and industry partners to stay at the forefront of innovation. Alongside delivering projects for Fortune 500 clients, we also develop proprietary AI/ML products addressing diverse industry challenges.
Headquartered in New Delhi, India, with an office in New York, USA, Hudson Data operates globally, driving excellence in data science, analytics, and artificial intelligence.
⸻
About the Role
We are seeking a Data Analyst & Modeling Specialist with a passion for leveraging AI, machine learning, and cloud analytics to improve business processes, enhance decision-making, and drive innovation. You’ll play a key role in transforming raw data into insights, building predictive models, and delivering data-driven strategies that have real business impact.
⸻
Key Responsibilities
1. Data Collection & Management
• Gather and integrate data from multiple sources including databases, APIs, spreadsheets, and cloud warehouses.
• Design and maintain ETL pipelines ensuring data accuracy, scalability, and availability.
• Utilize any major cloud platform (Google Cloud, AWS, or Azure) for data storage, processing, and analytics workflows.
• Collaborate with engineering teams to define data governance, lineage, and security standards.
2. Data Cleaning & Preprocessing
• Clean, transform, and organize large datasets using Python (pandas, NumPy) and SQL.
• Handle missing data, duplicates, and outliers while ensuring consistency and quality.
• Automate data preparation using Linux scripting, Airflow, or cloud-native schedulers.
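A small pandas sketch of the cleaning steps above (missing values, duplicates, outliers); the file and column names are illustrative.

    # Basic cleaning pass: de-duplicate, impute, and cap outliers, then persist.
    import pandas as pd

    df = pd.read_csv("transactions.csv")  # hypothetical input

    df = df.drop_duplicates(subset=["txn_id"])                  # remove duplicate records
    df["amount"] = df["amount"].fillna(df["amount"].median())   # impute missing values

    # Cap extreme outliers at the 1st/99th percentiles instead of dropping rows.
    low, high = df["amount"].quantile([0.01, 0.99])
    df["amount"] = df["amount"].clip(low, high)

    df.to_parquet("transactions_clean.parquet", index=False)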
3. Data Analysis & Insights
• Perform exploratory data analysis (EDA) to identify key trends, correlations, and drivers.
• Apply statistical techniques such as regression, time-series analysis, and hypothesis testing.
• Use Excel (including pivot tables) and BI tools (Tableau, Power BI, Looker, or Google Data Studio) to develop insightful reports and dashboards.
• Present findings and recommendations to cross-functional stakeholders in a clear and actionable manner.
4. Predictive Modeling & Machine Learning
• Build and optimize predictive and classification models using scikit-learn, XGBoost, LightGBM, TensorFlow, Keras, and H2O.ai.
• Perform feature engineering, model tuning, and cross-validation for performance optimization.
• Deploy and manage ML models using Vertex AI (GCP), AWS SageMaker, or Azure ML Studio.
• Continuously monitor, evaluate, and retrain models to ensure business relevance.
5. Reporting & Visualization
• Develop interactive dashboards and automated reports for performance tracking.
• Use pivot tables, KPIs, and data visualizations to simplify complex analytical findings.
• Communicate insights effectively through clear data storytelling.
6. Collaboration & Communication
• Partner with business, engineering, and product teams to define analytical goals and success metrics.
• Translate complex data and model results into actionable insights for decision-makers.
• Advocate for data-driven culture and support data literacy across teams.
7. Continuous Improvement & Innovation
• Stay current with emerging trends in AI, ML, data visualization, and cloud technologies.
• Identify opportunities for process optimization, automation, and innovation.
• Contribute to internal R&D and AI product development initiatives.
⸻
Required Skills & Qualifications
Technical Skills
• Programming: Proficient in Python (pandas, NumPy, scikit-learn, XGBoost, LightGBM, TensorFlow, Keras, H2O.ai).
• Databases & Querying: Advanced SQL skills; experience with BigQuery, Redshift, or Azure Synapse is a plus.
• Cloud Expertise: Hands-on experience with one or more major platforms — Google Cloud, AWS, or Azure.
• Visualization & Reporting: Skilled in Tableau, Power BI, Looker, or Excel (pivot tables, data modeling).
• Data Engineering: Familiarity with ETL tools (Airflow, dbt, or similar).
• Operating Systems: Strong proficiency with Linux/Unix for scripting and automation.
Soft Skills
• Strong analytical, problem-solving, and critical-thinking abilities.
• Excellent communication and presentation skills, including data storytelling.
• Curiosity and creativity in exploring and interpreting data.
• Collaborative mindset, capable of working in cross-functional and fast-paced environments.
⸻
Education & Certifications
• Bachelor’s degree in Data Science, Computer Science, Statistics, Mathematics, or a related field.
• Master’s degree in Data Analytics, Machine Learning, or Business Intelligence preferred.
• Relevant certifications are highly valued:
• Google Cloud Professional Data Engineer
• AWS Certified Data Analytics – Specialty
• Microsoft Certified: Azure Data Scientist Associate
• TensorFlow Developer Certificate
⸻
Why Join Hudson Data
At Hudson Data, you’ll be part of a dynamic, innovative, and globally connected team that uses cutting-edge tools — from AI and ML frameworks to cloud-based analytics platforms — to solve meaningful problems. You’ll have the opportunity to grow, experiment, and make a tangible impact in a culture that values creativity, precision, and collaboration.
We are seeking a highly skilled and experienced Python Developer with a strong background in fintech to join our dynamic team. The ideal candidate will have at least 7 years of professional experience in Python development, with a proven track record of delivering high-quality software solutions in the fintech industry.
Responsibilities:
Design, build, and maintain RESTful APIs using Django and Django Rest Framework.
Integrate AI/ML models into existing applications to enhance functionality and provide data-driven insights.
Collaborate with cross-functional teams, including product managers, designers, and other developers, to define and implement new features and functionalities.
Manage deployment processes, ensuring smooth and efficient delivery of applications.
Implement and maintain payment gateway solutions to facilitate secure transactions.
Conduct code reviews, provide constructive feedback, and mentor junior members of the development team.
Stay up-to-date with emerging technologies and industry trends, and evaluate their potential impact on our products and services.
Maintain clear and comprehensive documentation for all development processes and integrations.
Requirements:
Proficiency in Python and Django/Django Rest Framework.
Experience with REST API development and integration.
Knowledge of AI/ML concepts and practical experience integrating AI/ML models.
Hands-on experience with deployment tools and processes.
Familiarity with payment gateway integration and management.
Strong understanding of database systems (SQL, PostgreSQL, MySQL).
Experience with version control systems (Git).
Strong problem-solving skills and attention to detail.
Excellent communication and teamwork skills.
Job Types: Full-time, Permanent
Work Location: In person
AccioJob is conducting a Walk-In Hiring Drive with Xebo.ai for the position of Software Engineer.
To apply, register and select your slot here: https://go.acciojob.com/SMPPbd
Required Skills: DSA, SQL, OOPS, JavaScript, React
Eligibility:
Degree: BTech./BE
Branch: Computer Science/CSE/Other CS related branch, IT
Graduation Year: 2025, 2026
Work Details:
Work Location: Noida (Onsite)
CTC: ₹6 LPA to ₹7 LPA
Evaluation Process:
Round 1: Offline Assessment at AccioJob Noida Centre
Further Rounds (for shortlisted candidates only):
Resume Shortlist, Technical Interview 1, Technical Interview 2, Technical Interview 3, HR Discussion
Important Note: Bring your laptop & earphones for the test.
Register here: https://go.acciojob.com/SMPPbd
About Us
MIC Global is a full-stack micro-insurance provider, purpose-built to design and deliver embedded parametric micro-insurance solutions to platform companies. Our mission is to make insurance more accessible for new, emerging, and underserved risks using our MiIncome loss-of-income products, MiConnect, MiIdentity, Coverpoint technology, and more — backed by innovative underwriting capabilities as a Lloyd’s Coverholder and through our in-house reinsurer, MicRe.
We operate across 12+ countries, with our Global Operations Center in Bangalore supporting clients worldwide, including a leading global ride-hailing platform and a top international property rental marketplace. Our distributed teams across the UK, USA, and Asia collaborate to ensure that no one is beyond the reach of financial security.
About the Team
As a Lead Data Specialist at MIC Global, you will play a key role in transforming data into actionable insights that inform strategic and operational decisions. You will work closely with Product, Engineering, and Business teams to analyze trends, build dashboards, and ensure that data pipelines and reporting structures are accurate, automated, and scalable.
This is a hands-on, analytical, and technically focused role ideal for someone experienced in data analytics and engineering practices. You will use SQL, Python, and modern BI tools to interpret large datasets, support pricing models, and help shape the data-driven culture across MIC Global
Key Roles and Responsibilities
Data Analytics & Insights
- Analyze complex datasets to identify trends, patterns, and insights that support business and product decisions.
- Partner with Product, Operations, and Finance teams to generate actionable intelligence on customer behavior, product performance, and risk modeling.
- Contribute to the development of pricing models, ensuring accuracy and commercial relevance.
- Deliver clear, concise data stories and visualizations that drive executive and operational understanding.
- Develop analytical toolkits for underwriting, pricing and claims
Data Engineering & Pipeline Management
- Design, implement, and maintain reliable data pipelines and ETL workflows.
- Write clean, efficient scripts in Python for data cleaning, transformation, and automation.
- Ensure data quality, integrity, and accessibility across multiple systems and environments.
- Work with Azure data services to store, process, and manage large datasets efficiently.
Business Intelligence & Reporting
- Develop, maintain, and optimize dashboards and reports using Power BI (or similar tools).
- Automate data refreshes and streamline reporting processes for cross-functional teams.
- Track and communicate key business metrics, providing proactive recommendations.
Collaboration & Innovation
- Collaborate with engineers, product managers, and business leads to align analytical outputs with company goals.
- Support the adoption of modern data tools and agentic AI frameworks to improve insight generation and automation.
- Continuously identify opportunities to enhance data-driven decision-making across the organization.
Ideal Candidate Profile
- 10+ years of relevant experience in data analysis or business intelligence, ideally within product-based SaaS, fintech, or insurance environments.
- Proven expertise in SQL for data querying, manipulation, and optimization.
- Hands-on experience with Python for data analytics, automation, and scripting.
- Strong proficiency in Power BI, Tableau, or equivalent BI tools.
- Experience working in Azure or other cloud-based data ecosystems.
- Solid understanding of data modeling, ETL processes, and data governance.
- Ability to translate business questions into technical analysis and communicate findings effectively.
Preferred Attributes
- Experience in insurance or fintech environments, especially operations and claims analytics.
- Exposure to agentic AI and modern data stack tools (e.g., dbt, Snowflake, Databricks).
- Strong attention to detail, analytical curiosity, and business acumen.
- Collaborative mindset with a passion for driving measurable impact through data.
Benefits
- 33 days of paid holiday
- Competitive compensation well above market average
- Work in a high-growth, high-impact environment with passionate, talented peers
- Clear path for personal growth and leadership development
If interested, please share your resume at ayushi.dwivedi at cloudsufi.com.
Note - This role is remote but requires a quarterly visit to the Noida office (1 week per quarter); if you are okay with that, please share your resume.
Data Engineer
Position Type: Full-time
About Us
CLOUDSUFI, a Google Cloud Premier Partner, is a global leading provider of data-driven digital transformation across cloud-based enterprises. With a global presence and focus on Software & Platforms, Life sciences and Healthcare, Retail, CPG, financial services, and supply chain, CLOUDSUFI is positioned to meet customers where they are in their data monetization journey.
Job Summary
We are seeking a highly skilled and motivated Data Engineer to join our Development POD for the Integration Project. The ideal candidate will be responsible for designing, building, and maintaining robust data pipelines to ingest, clean, transform, and integrate diverse public datasets into our knowledge graph. This role requires a strong understanding of Cloud Platform (GCP) services, data engineering best practices, and a commitment to data quality and scalability.
Key Responsibilities
ETL Development: Design, develop, and optimize data ingestion, cleaning, and transformation pipelines for various data sources (e.g., CSV, API, XLS, JSON, SDMX) using Cloud Platform services (Cloud Run, Dataflow) and Python.
Schema Mapping & Modeling: Work with LLM-based auto-schematization tools to map source data to our schema.org vocabulary, defining appropriate Statistical Variables (SVs) and generating MCF/TMCF files.
Entity Resolution & ID Generation: Implement processes for accurately matching new entities with existing IDs or generating unique, standardized IDs for new entities.
Knowledge Graph Integration: Integrate transformed data into the Knowledge Graph, ensuring proper versioning and adherence to existing standards.
API Development: Develop and enhance REST and SPARQL APIs via Apigee to enable efficient access to integrated data for internal and external stakeholders.
Data Validation & Quality Assurance: Implement comprehensive data validation and quality checks (statistical, schema, anomaly detection) to ensure data integrity, accuracy, and freshness. Troubleshoot and resolve data import errors.
Automation & Optimization: Collaborate with the Automation POD to leverage and integrate intelligent assets for data identification, profiling, cleaning, schema mapping, and validation, aiming for significant reduction in manual effort.
Collaboration: Work closely with cross-functional teams, including Managed Service POD, Automation POD, and relevant stakeholders.
Qualifications and Skills
Education: Bachelor's or Master's degree in Computer Science, Data Engineering, Information Technology, or a related quantitative field.
Experience: 3+ years of proven experience as a Data Engineer, with a strong portfolio of successfully implemented data pipelines.
Programming Languages: Proficiency in Python for data manipulation, scripting, and pipeline development.
Cloud Platforms and Tools: Expertise in Google Cloud Platform (GCP) services, including Cloud Storage, Cloud SQL, Cloud Run, Dataflow, Pub/Sub, BigQuery, and Apigee. Proficiency with Git-based version control.
Core Competencies:
Must Have - SQL, Python, BigQuery, (GCP DataFlow / Apache Beam), Google Cloud Storage (GCS)
Must Have - Proven ability in comprehensive data wrangling, cleaning, and transforming complex datasets from various formats (e.g., API, CSV, XLS, JSON)
Secondary Skills - SPARQL, Schema.org, Apigee, CI/CD (Cloud Build), GCP, Cloud Data Fusion, Data Modelling
Solid understanding of data modeling, schema design, and knowledge graph concepts (e.g., Schema.org, RDF, SPARQL, JSON-LD).
Experience with data validation techniques and tools.
Familiarity with CI/CD practices and the ability to work in an Agile framework.
Strong problem-solving skills and keen attention to detail.
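A minimal, hypothetical Apache Beam sketch of the ingest/clean/transform pipelines described above; it runs locally on the DirectRunner, and the same code can be submitted to GCP Dataflow. The bucket paths and parsing logic are placeholders.

    # Read a CSV, parse and filter rows, and write a cleaned extract.
    import csv
    import apache_beam as beam

    def parse_row(line: str) -> dict:
        country, year, value = next(csv.reader([line]))
        return {"country": country, "year": int(year), "value": float(value)}

    with beam.Pipeline() as p:
        (
            p
            | "Read" >> beam.io.ReadFromText("gs://my-bucket/raw/stats.csv", skip_header_lines=1)
            | "Parse" >> beam.Map(parse_row)
            | "DropInvalid" >> beam.Filter(lambda r: r["value"] >= 0)
            | "Format" >> beam.Map(lambda r: f'{r["country"]},{r["year"]},{r["value"]}')
            | "Write" >> beam.io.WriteToText("gs://my-bucket/clean/stats", file_name_suffix=".csv")
        )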
Preferred Qualifications:
Experience with LLM-based tools or concepts for data automation (e.g., auto-schematization).
Familiarity with similar large-scale public dataset integration initiatives.
Experience with multilingual data integration.
If interested, please send your resume to ayushi.dwivedi at cloudsufi.com.
The candidate's current location must be Bangalore (as client office visits are required), and the candidate must be open to a 1-week visit to the Noida office each quarter.
About Us
CLOUDSUFI, a Google Cloud Premier Partner, is a global leading provider of data-driven digital transformation across cloud-based enterprises. With a global presence and focus on Software & Platforms, Life sciences and Healthcare, Retail, CPG, financial services and supply chain, CLOUDSUFI is positioned to meet customers where they are in their data monetization journey.
Our Values
We are a passionate and empathetic team that prioritizes human values. Our purpose is to elevate the quality of lives for our family, customers, partners and the community.
Equal Opportunity Statement
CLOUDSUFI is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. All qualified candidates receive consideration for employment without regard to race, colour, religion, gender, gender identity or expression, sexual orientation and national origin status. We provide equal opportunities in employment, advancement, and all other areas of our workplace. Please explore more at https://www.cloudsufi.com/
About Us
MIC Global is a full-stack micro-insurance provider, purpose-built to design and deliver embedded parametric micro-insurance solutions to platform companies. Our mission is to make insurance more accessible for new, emerging, and underserved risks using our MiIncome loss-of-income products, MiConnect, MiIdentity, Coverpoint technology, and more — backed by innovative underwriting capabilities as a Lloyd’s Coverholder and through our in-house reinsurer, MicRe.
We operate across 12+ countries, with our Global Operations Center in Bangalore supporting clients worldwide, including a leading global ride-hailing platform and a top international property rental marketplace. Our distributed teams across the UK, USA, and Asia collaborate to ensure that no one is beyond the reach of financial security.
About the Team
We're seeking a mid-level Data Engineer with strong DBA experience to join our insurtech data analytics team. This role focuses on supporting various teams including infrastructure, reporting, and analytics. You'll be responsible for SQL performance optimization, building data pipelines, implementing data quality checks, and helping teams with database-related challenges. You'll work closely with the infrastructure team on production support, assist the reporting team with complex queries, and support the analytics team in building visualizations and dashboards.
Key Roles and Responsibilities
Database Administration & Optimization
- Support infrastructure team with production database issues and troubleshooting
- Debug and resolve SQL performance issues, identify bottlenecks, and optimize queries
- Optimize stored procedures, functions, and views for better performance
- Perform query tuning, index optimization, and execution plan analysis
- Design and develop complex stored procedures, functions, and views
- Support the reporting team with complex SQL queries and database design
Data Engineering & Pipelines
- Design and build ETL/ELT pipelines using Azure Data Factory and Python
- Implement data quality checks and validation rules before data enters pipelines
- Develop data integration solutions to connect various data sources and systems
- Create automated data validation, quality monitoring, and alerting mechanisms
- Develop Python scripts for data processing, transformation, and automation
- Build and maintain data models to support reporting and analytics requirements
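As a purely illustrative, hedged example of the pre-load data quality checks mentioned in this section (not the team's actual implementation), a small pandas-based validation step might look like the following; the column names and rules are hypothetical.

```python
# Hypothetical sketch: basic data-quality checks before a dataset enters a pipeline.
# Column names and rules are illustrative placeholders, not the real schema.
import pandas as pd

def validate_claims(df: pd.DataFrame) -> list[str]:
    """Return a list of data-quality issues found in the input frame."""
    issues = []
    if df["policy_id"].isna().any():
        issues.append("null policy_id values found")
    if df.duplicated(subset=["policy_id", "claim_date"]).any():
        issues.append("duplicate (policy_id, claim_date) rows found")
    if (df["claim_amount"] < 0).any():
        issues.append("negative claim_amount values found")
    return issues

if __name__ == "__main__":
    sample = pd.DataFrame(
        {
            "policy_id": ["P1", "P2", None],
            "claim_date": ["2024-01-01", "2024-01-02", "2024-01-03"],
            "claim_amount": [120.0, -5.0, 30.0],
        }
    )
    for issue in validate_claims(sample):
        print("DQ issue:", issue)  # in a real pipeline these would block the load or raise an alert
```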
Support & Collaboration
- Help data analytics team build visualizations and dashboards by providing data models and queries
- Support reporting team with data extraction, transformation, and complex reporting queries
- Collaborate with development teams to support application database requirements
- Provide technical guidance and best practices for database design and query optimization
Azure & Cloud
- Work with Azure services including Azure SQL Database, Azure Data Factory, Azure Storage, Azure Functions, and Azure ML
- Implement cloud-based data solutions following Azure best practices
- Support cloud database migrations and optimizations
- Work with Agentic AI concepts and tools to build intelligent data solutions
Ideal Candidate Profile
Essential
- 5-8 years of experience in data engineering and database administration
- Strong expertise in MS SQL Server (2016+) administration and development
- Proficient in writing complex SQL queries, stored procedures, functions, and views
- Hands-on experience with Microsoft Azure services (Azure SQL Database, Azure Data Factory, Azure Storage)
- Strong Python scripting skills for data processing and automation
- Experience with ETL/ELT design and implementation
- Knowledge of database performance tuning, query optimization, and indexing strategies
- Experience with SQL performance debugging tools (XEvents, Profiler, or similar)
- Understanding of data modeling and dimensional design concepts
- Knowledge of Agile methodology and experience working in Agile teams
- Strong problem-solving and analytical skills
- Understanding of Agentic AI concepts and tools
- Excellent communication skills and ability to work with cross-functional teams
Desirable
- Knowledge of insurance or financial services domain
- Experience with Azure ML and machine learning pipelines
- Experience with Azure DevOps and CI/CD pipelines
- Familiarity with data visualization tools (Power BI, Tableau)
- Experience with NoSQL databases (Cosmos DB, MongoDB)
- Knowledge of Spark, Databricks, or other big data technologies
- Azure certifications (Azure Data Engineer Associate, Azure Database Administrator Associate)
- Experience with version control systems (Git, Azure Repos)
Tech Stack
- MS SQL Server 2016+, Azure SQL Database, Azure Data Factory, Azure ML, Azure Storage, Azure Functions, Python, T-SQL, Stored Procedures, ETL/ELT, SQL Performance Tools (XEvents, Profiler), Agentic AI Tools, Azure DevOps, Power BI, Agile, Git
Benefits
- 33 days of paid holiday
- Competitive compensation well above market average
- Work in a high-growth, high-impact environment with passionate, talented peers
- Clear path for personal growth and leadership development
Job Title : Java Developer
Experience : 2 to 10 Years
Location : Pune (Must be currently in Pune)
Notice Period : Immediate to 15 Days (Serving NP acceptable)
Budget :
- 2 to 3.5 yrs → up to 13 LPA
- 3.5 to 5 yrs → up to 18 LPA
- 5+ yrs → up to 25 LPA
Mandatory Skills : Java 8/17, Spring Boot, REST APIs, Hibernate/JPA, SQL/RDBMS, OOPs, Design Patterns, Git/GitHub, Unit Testing, Microservices (Good Coding Skills Mandatory)
Role Overview :
Hiring multiple Java Developers to build scalable and performance-driven applications. Strong hands-on coding and problem-solving skills required.
Key Responsibilities :
- Develop and maintain Java-based applications & REST services
- Write clean, testable code covered by JUnit unit tests
- Participate in code reviews, debugging & optimization
- Work with SQL databases, CI/CD & version control tools
- Collaborate with cross-functional teams in Agile setups
Good to Have :
- MongoDB, AWS, Docker, Jenkins/GitHub Actions, Prometheus, Grafana, Spring Actuators, Tomcat/JBoss

Position: Full Stack Developer (PHP CodeIgniter)
Company : Mayura Consultancy Services
Experience: 2 yrs
Location : Bangalore
Skills: HTML, CSS, Bootstrap, JavaScript, AJAX, jQuery, PHP, and CodeIgniter (CI)
Work Location: Work From Home(WFH)
Apply: Please apply for the job opening using the URL below, based on your skill set. Once you complete the application form, we will review your profile.
Website:
https://www.mayuraconsultancy.com/careers/mcs-full-stack-web-developer-opening?r=jlp
Requirements :
- Prior experience in Full Stack Development using PHP Codeigniter
Perks of Working with MCS :
- Contribute to Innovative Solutions: Join a dynamic team at the forefront of software development, contributing to innovative projects and shaping the technological solutions of the organization.
- Work with Clients from across the Globe: Collaborate with clients from around the world, gaining exposure to diverse cultures and industries, and contributing to the development of solutions that address the unique needs and challenges of global businesses.
- Complete Work From Home Opportunity: Enjoy the flexibility of working entirely from the comfort of your home, empowering you to manage your schedule and achieve a better work-life balance while coding innovative solutions for MCS.
- Opportunity to Work on Projects Developing from Scratch: Engage in projects from inception to completion, working on solutions developed from scratch and having the opportunity to make a significant impact on the design, architecture, and functionality of the final product.
- Diverse Projects: Be involved in a variety of development projects, including web applications, mobile apps, e-commerce platforms, and more, allowing you to showcase your versatility as a Full Stack Developer and expand your portfolio.
Joining MCS as a Full Stack Developer opens the door to a world where your technical skills can shine and grow, all while enjoying a supportive and dynamic work environment. We're not just building solutions; we're building the future—and you can be a key part of that journey.
Role Overview
We are looking for a Senior Marketing Analytics professional with strong experience in Marketing Mix Modeling (MMM), Attribution Modeling, and ROI analysis. The role involves working closely with marketing and business leadership to deliver actionable insights that optimize marketing spend and drive business growth.
Key Responsibilities
- Analyze large-scale marketing and customer datasets to deliver actionable business insights.
- Build and maintain Marketing Mix Models (MMM) to measure media effectiveness and optimize marketing investments.
- Design and implement attribution models (multi-touch, incrementality, lift analysis) to evaluate campaign performance.
- Perform ROI, CAC, ROAS, and funnel analysis across marketing channels.
- Write complex SQL queries to extract, combine, and analyze data from multiple sources.
- Use Python for statistical analysis, regression modeling, forecasting, and experimentation.
- Develop and publish Tableau dashboards and automated reports for leadership and stakeholders.
- Work with marketing platforms such as Google Analytics (GA4), Adobe Analytics, Salesforce Marketing Cloud, Marketo, or similar tools.
- Collaborate with cross-functional teams to define KPIs, reporting requirements, and analytics roadmaps.
- Present insights and recommendations clearly to senior leadership and non-technical stakeholders.
- Ensure data accuracy, consistency, and documentation of analytics methodologies.
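By way of illustration only, a very simplified marketing-mix regression of the sort referenced in the responsibilities above could be sketched in Python as follows. The channel names and data are synthetic placeholders, and a production MMM would add adstock, saturation, seasonality, and validation.

```python
# Hypothetical sketch: a toy marketing-mix regression (sales ~ channel spend).
# Real MMM work would include adstock/saturation transforms, controls, and validation.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
weeks = 104
df = pd.DataFrame(
    {
        "tv_spend": rng.uniform(0, 100, weeks),
        "search_spend": rng.uniform(0, 50, weeks),
        "social_spend": rng.uniform(0, 30, weeks),
    }
)
# Synthetic sales series, generated purely for demonstration.
df["sales"] = (
    200 + 1.8 * df["tv_spend"] + 3.0 * df["search_spend"]
    + 1.2 * df["social_spend"] + rng.normal(0, 20, weeks)
)

X = sm.add_constant(df[["tv_spend", "search_spend", "social_spend"]])
model = sm.OLS(df["sales"], X).fit()
print(model.params)  # naive per-channel contribution estimates used in ROI discussion
```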
Required Skills & Qualifications
- 8+ years of experience in analytics, with a strong focus on marketing or digital analytics.
- Hands-on expertise in Marketing Mix Modeling (MMM) and Attribution Modeling.
- Strong proficiency in SQL and Python for data analysis.
- Experience with Tableau for dashboarding and automated reporting.
- Working knowledge of Google Analytics / GA4, Adobe Analytics, and marketing automation or CRM tools.
- Strong understanding of data modeling, reporting, and ROI measurement.
- Excellent stakeholder management, communication, and data storytelling skills.
- Ability to work independently in a fast-paced and ambiguous environment.
Good to Have
- Experience with Power BI / Looker / BigQuery
- Exposure to A/B testing, experimentation, or econometric modeling
- Experience working with large marketing datasets and cloud platforms
Position Overview:
As a BI (Business Intelligence) Developer, you will be responsible for designing, developing, and maintaining the business intelligence solutions that support data analysis and reporting. You will collaborate with business stakeholders, analysts, and data engineers to understand requirements and translate them into efficient and effective BI solutions. Your role will involve working with various data sources, designing data models, assisting ETL (Extract, Transform, Load) processes, and developing interactive dashboards and reports.
Key Responsibilities:
1. Requirement Gathering: Collaborate with business stakeholders to understand their data analysis and reporting needs. Translate these requirements into technical specifications and develop appropriate BI solutions.
2. Data Modelling: Design and develop data models that effectively represent the underlying business processes and facilitate data analysis and reporting. Ensure data integrity, accuracy, and consistency within the data models.
3. Dashboard and Report Development: Design, develop, and deploy interactive dashboards and reports using Sigma Computing.
4. Data Integration: Integrate data from various systems and sources to provide a comprehensive view of business performance. Ensure data consistency and accuracy across different data sets.
5. Performance Optimization: Identify performance bottlenecks in BI solutions and optimize query performance, data processing, and report rendering. Continuously monitor and fine-tune the performance of BI applications.
6. Data Governance: Ensure compliance with data governance policies and standards. Implement appropriate security measures to protect sensitive data.
7. Documentation and Training: Document the technical specifications, data models, ETL processes, and BI solution configurations.
8. Ensure that the proposed solutions meet business needs and requirements.
9. Create and own Business/Functional Requirement Documents.
10. Monitor and track project milestones and deliverables.
11. Submit project deliverables, ensuring adherence to quality standards.
Qualifications and Skills:
1. Master's or Bachelor's degree in IT or a relevant field, with a minimum of 2-4 years of experience in Business Analysis or a related field.
2. Proven experience as a BI Developer or in a similar role.
3. Fundamental analytical and conceptual thinking skills, with demonstrated experience managing implementation projects for platform solutions.
4. Excellent planning, organizational, and time management skills.
5. Strong understanding of data warehousing concepts, dimensional modelling, and ETL processes.
6. Proficiency in SQL and Snowflake for data extraction, manipulation, and analysis.
7. Experience with one or more BI tools such as Sigma Computing.
8. Knowledge of data visualization best practices and the ability to create compelling data visualizations.
9. Solid problem-solving and analytical skills with a detail-oriented mindset.
10. Strong communication and interpersonal skills to collaborate effectively with different stakeholders.
11. Ability to work independently and manage multiple priorities in a fast-paced environment.
12. Knowledge of data governance principles and security best practices.
13. Experience managing implementation projects of platform solutions for U.S. clients is preferable.
14. Exposure to the U.S. debt collection industry is a plus.

Global digital transformation solutions provider.
JOB DETAILS:
Job Role: Lead I - .Net Developer - .NET, Azure, Software Engineering
Industry: Global digital transformation solutions provider
Work Mode: Hybrid
Salary: Best in Industry
Experience: 6-8 years
Location: Hyderabad
Job Description:
• Experience in Microsoft web development technologies such as Web API and SOAP/XML
• C#/.NET/.NET Core and ASP.NET web application experience; cloud-based development experience in AWS or Azure
• Knowledge of cloud architecture and technologies
• Support/Incident management experience in a 24/7 environment
• SQL Server and SSIS experience
• DevOps experience of Github and Jenkins CI/CD pipelines or similar
• Windows Server 2016/2019+ and SQL Server 2019+ experience
• Experience of the full software development lifecycle
• You will write clean, scalable code, with a view towards design patterns and security best practices
• Understanding of Agile methodologies and working within the Scrum framework; AWS knowledge
Must-Haves
C#/.NET/.NET Core (experienced), ASP.NET Web application (experienced), SQL Server/SSIS (experienced), DevOps (Github/Jenkins CI/CD), Cloud architecture (AWS or Azure)
.NET (Senior level), Azure (Very good knowledge), Stakeholder Management (Good)
Mandatory skills: .NET Core with Azure or AWS experience
Notice period - 0 to 15 days only
Location: Hyderabad
Virtual Drive - 17th Jan
Review Criteria
- Strong Data / ETL Test Engineer
- 5+ years of overall experience in Testing/QA
- 3+ years of hands-on end-to-end data testing/ETL testing experience, covering data extraction, transformation, loading validation, reconciliation, working across BI / Analytics / Data Warehouse / e-Governance platforms
- Must have strong understanding and hands-on exposure to Data Warehouse concepts and processes, including fact & dimension tables, data models, data flows, aggregations, and historical data handling.
- Must have experience in Data Migration Testing, including validation of completeness, correctness, reconciliation, and post-migration verification from legacy platforms to upgraded/cloud-based data platforms.
- Must have independently handled test strategy, test planning, test case design, execution, defect management, and regression cycles for ETL and BI testing
- Hands-on experience with ETL tools and SQL-based data validation is mandatory (Working knowledge or hands-on exposure to Redshift and/or Qlik will be considered sufficient)
- Must hold a Bachelor's degree (B.E./B.Tech) or a Master's degree (M.Tech/MCA/M.Sc/MS)
- Must demonstrate strong verbal and written communication skills, with the ability to work closely with business stakeholders, data teams, and QA leadership
- Mandatory Location: Candidate must be based within Delhi NCR (100 km radius)
Preferred
- Relevant certifications such as ISTQB or Data Analytics / BI certifications (Power BI, Snowflake, AWS, etc.)
Job Specific Criteria
- CV Attachment is mandatory
- Do you have experience working on Government projects/companies, mention brief about project?
- Do you have experience working on enterprise projects/companies, mention brief about project?
- Please mention the names of 2 key projects you have worked on related to Data Warehouse / ETL / BI testing?
- Do you hold any ISTQB or Data / BI certifications (Power BI, Snowflake, AWS, etc.)?
- Do you have exposure to BI tools such as Qlik?
- Are you willing to relocate to Delhi and why (if not from Delhi)?
- Are you available for a face-to-face round?
Role & Responsibilities
- 5 years' experience in Data Testing across BI/Analytics platforms, with at least 2 large-scale enterprise Data Warehouse / Analytics / eGovernance programs
- Proficiency in ETL, Data Warehouse, and BI report/dashboard validation, including test planning, data reconciliation, acceptance criteria definition, defect triage, and regression cycle management for BI landscapes (a brief reconciliation sketch follows this list)
- Proficient in analyzing business requirements and data mapping specifications (BRDs, Data Models, Source-to-Target Mappings, User Stories, Reports, Dashboards) to define comprehensive test scenarios and test cases
- Ability to review high-level and low-level data models, ETL workflows, API specifications, and business logic implementations to design test strategies ensuring accuracy, consistency, and performance of data pipelines
- Ability to test and validate data migrated from an old platform to an upgraded platform and ensure the completeness and correctness of the migration
- Experience conducting tests of migrated data and defining test scenarios and test cases for the same
- Experience with BI tools like Qlik, ETL platforms, Data Lake platforms, and Redshift to support end-to-end validation
- Exposure to Data Quality, Metadata Management, and Data Governance frameworks, ensuring KPIs, metrics, and dashboards align with business expectations
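As a non-authoritative illustration of the reconciliation work above, a source-to-target row-count and aggregate check might be sketched in Python as below. The connection strings, table names, and columns are hypothetical placeholders.

```python
# Hypothetical sketch: reconcile row counts and a sum between source and target tables.
# Connection strings, table names, and columns are placeholders.
# Redshift is typically reachable via a Postgres-compatible DSN or the sqlalchemy-redshift dialect.
import pandas as pd
from sqlalchemy import create_engine

source = create_engine("postgresql://user:pass@legacy-host/sourcedb")   # placeholder DSN
target = create_engine("postgresql://user:pass@warehouse-host/targetdb") # placeholder DSN

checks = {
    "row_count": "SELECT COUNT(*) AS metric FROM sales_fact",
    "amount_sum": "SELECT SUM(amount) AS metric FROM sales_fact",
}

for name, query in checks.items():
    src_val = pd.read_sql(query, source)["metric"].iloc[0]
    tgt_val = pd.read_sql(query, target)["metric"].iloc[0]
    status = "OK" if src_val == tgt_val else "MISMATCH"
    print(f"{name}: source={src_val} target={tgt_val} -> {status}")
```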
What You’ll Do:
As a Sr. Data Scientist, you will work closely across DeepIntent Data Science teams located in New York, India, and Bosnia. The role will focus on building predictive models, implementing data-driven solutions to maximize ad effectiveness. You will also lead efforts in generating analyses and insights related to the measurement of campaign outcomes, Rx, patient journey, and supporting the evolution of the DeepIntent product suite. Activities in this position include developing and deploying models in production, reading campaign results, analyzing medical claims, clinical, demographic and clickstream data, performing analysis and creating actionable insights, summarizing, and presenting results and recommended actions to internal stakeholders and external clients, as needed.
- Explore ways to create better predictive models.
- Analyze medical claims, clinical, demographic and clickstream data to produce and present actionable insights.
- Explore ways of using inference, statistical, and machine learning techniques to improve the performance of existing algorithms and decision heuristics.
- Design and deploy new iterations of production-level code.
- Contribute posts to our upcoming technical blog.
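Purely as a hedged illustration of the predictive-modeling work described above (not DeepIntent's actual pipeline), a minimal scikit-learn baseline might look like this; the features and data are synthetic placeholders.

```python
# Hypothetical sketch: a baseline conversion-propensity model on synthetic features.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 5_000
X = rng.normal(size=(n, 4))          # e.g., claim counts, visit recency, ad exposures, demographics
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(size=n) > 0.8).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"hold-out AUC: {auc:.3f}")  # a real project would add calibration, feature pipelines, and monitoring
```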
Who You Are:
- Bachelor’s degree in a STEM field, such as Statistics, Mathematics, Engineering, Biostatistics, Econometrics, Economics, Finance, or Data Science.
- 5+ years of working experience as a Data Scientist or Researcher in digital marketing, consumer advertisement, telecom, or other areas requiring customer-level predictive analytics.
- Advanced proficiency in performing statistical analysis in Python, including relevant libraries, is required.
- Experience working with data processing, transformation and building model pipelines using tools such as Spark, Airflow, and Docker.
- You have an understanding of the ad-tech ecosystem, digital marketing and advertising data and campaigns or familiarity with the US healthcare patient and provider systems (e.g. medical claims, medications).
- You have varied and hands-on predictive machine learning experience (deep learning, boosting algorithms, inference…).
- You are interested in translating complex quantitative results into meaningful findings and interpretable deliverables, and communicating with less technical audiences orally and in writing.
- You can write production level code, work with Git repositories.
- Active Kaggle participant.
- Working experience with SQL.
- Familiar with medical and healthcare data (medical claims, Rx, preferred).
- Conversant with cloud technologies such as AWS or Google Cloud.
Company Description
AdElement is a leading digital advertising technology company that has been helping app publishers increase their ad revenue and reach untapped demand since 2011. With our expertise in connecting brands to app audiences on evolving screens, such as VR headsets and vehicle consoles, we enable our clients to be first to market. We have been recognized as the Google Agency of the Year and have offices globally, with our headquarters located in New Brunswick, New Jersey.
Job Description
Work alongside a highly skilled engineering team to design, develop, and maintain large-scale, highly performant, real-time applications.
Own building features, driving directly with product and other engineering teams.
Demonstrate excellent communication skills in working with technical and non-technical audiences.
Be an evangelist for best practices across all functions - developers, QA, and infrastructure/ops.
Be an evangelist for platform innovation and reuse.
Requirements:
2+ years of experience building large-scale and low-latency distributed systems.
Command of Java or C++.
Solid understanding of algorithms, data structures, performance optimization techniques, object-oriented programming, multi-threading, and real-time programming.
Experience with distributed caching, SQL/NO SQL, and other databases is a plus.
Experience with Big Data and cloud services such as AWS/GCP is a plus.
Experience in the advertising domain is a big plus.
B. S. or M. S. degree in Computer Science, Engineering, or equivalent.
Location: Pune, Maharashtra.
We are looking for Senior Software Engineers responsible for designing, developing, and maintaining large-scale distributed ad technology systems. This entails working on several different systems, platforms, and technologies, and collaborating with various engineering teams to meet a range of technological challenges. You will work with our product team to contribute to and influence the roadmap of our products and technologies, and also influence and inspire team members.
Experience
- 3 - 10 Years
Required Skills
- 3+ years of work experience and a degree in computer science or a similar field
- Knowledgeable about computer science fundamentals including data structures, algorithms, and coding
- Enjoy owning projects from creation to completion and wearing multiple hats
- Product focused mindset
- Experience building distributed systems capable of handling large volumes of traffic
- Fluency with Java, Vertex, Redis, Relational Databases
- Possess good communication skills
- Enjoy working in a team-oriented environment that values excellence
- Have a knack for solving very challenging problems
- (Preferred) Previous experience in advertising technology or gaming apps
- (Preferred) Hands-on experience with Spark, Kafka or similar open-source software
Responsibilities
- Creating design and architecture documents
- Conducting code reviews
- Collaborate with others in the engineering teams to meet a range of technological challenges
- Build, Design and Develop large scale advertising technology system capable of handling tens of billions of events daily
Education
- UG - B.Tech/B.E. - Computers; PG - M.Tech - Computer
What We Offer:
- Competitive salary and benefits package.
- Opportunities for professional growth and development.
- A collaborative and inclusive work environment.
Salary budget up to 50 LPA, or a 20% hike on current CTC.
You can text me on LinkedIn for a quick response.
Strong Data / ETL Test Engineer
Mandatory (Total Experience): Must have 5+ years of overall experience in Testing/QA
Mandatory (Experience 1 ): Must have minimum 3+ years of hands-on end-to-end data testing/ETL testing experience, covering data extraction, transformation, loading validation, reconciliation, working across BI / Analytics / Data Warehouse / e-Governance platforms
Mandatory (Experience 2): Must have strong understanding and hands-on exposure to Data Warehouse concepts and processes, including fact & dimension tables, data models, data flows, aggregations, and historical data handling.
Mandatory (Experience 3): Must have experience in Data Migration Testing, including validation of completeness, correctness, reconciliation, and post-migration verification from legacy platforms to upgraded/cloud-based data platforms.
Mandatory (Experience 4): Must have independently handled test strategy, test planning, test case design, execution, defect management, and regression cycles for ETL and BI testing
Mandatory (Tools & Platforms)- Hands-on experience with ETL tools and SQL-based data validation is mandatory (Working knowledge or hands-on exposure to Redshift and/or Qlik will be considered sufficient)
Mandatory (Education & Academic Background): Must hold a Bachelor's degree (B.E./B.Tech) or a Master's degree (M.Tech/MCA/M.Sc/MS)
Mandatory (Communication Skills): Must demonstrate strong verbal and written communication skills, with the ability to work closely with business stakeholders, data teams, and QA leadership
Mandatory (Note): There are 2 offices (Rajiv Chowk and Patel Chowk); the candidate can be placed at either office location
Mandatory Location: Candidate must be based within Delhi NCR (100 km radius)
Preferred
- Relevant certifications such as ISTQB or Data Analytics / BI certifications (Power BI, Snowflake, AWS, etc.)
Role - Dynamics 365 Data Migration Engineer/Developer
Experience level: 5+ years
Location: Remote
Prior experience in Dynamics 365 data migration projects.
Knowledge of SSIS, Azure Fabric, and Azure Data Factory.
Good understanding of Dataverse data structure and integration patterns.
Proficiency in SQL for data extraction and transformation.
Experience in preparing data mapping and migration documentation.
Collaborate with functional teams for data validation and reconciliation.
Prepare data mapping documents and ensure accurate transformation
Job Title: Python Developer (4–6 Years Experience)
Location: Mumbai (Onsite)
Experience: 4–6 Years
Salary: ₹50,000 – ₹90,000 per month (depending on experience & skill set)
Employment Type: Full-time
Job Description
We are looking for an experienced Python Developer to join our growing team in Mumbai. The ideal candidate will have strong hands-on experience in Python development, building scalable backend systems, and working with databases and APIs.
Key Responsibilities
- Design, develop, test, and maintain Python-based applications
- Build and integrate RESTful APIs
- Work with frameworks such as Django / Flask / FastAPI
- Write clean, reusable, and efficient code
- Collaborate with frontend developers, QA, and project managers
- Optimize application performance and scalability
- Debug, troubleshoot, and resolve technical issues
- Participate in code reviews and follow best coding practices
- Work with databases and ensure data security and integrity
- Deploy and maintain applications in staging/production environments
Required Skills & Qualifications
- 4–6 years of hands-on experience in Python development
- Strong experience with Django / Flask / FastAPI
- Good understanding of REST APIs
- Experience with MySQL / PostgreSQL / MongoDB
- Familiarity with Git and version control workflows
- Knowledge of OOP concepts and design principles
- Experience with Linux-based environments
- Understanding of basic security and performance optimization
- Ability to work independently as well as in a team
Good to Have (Preferred Skills)
- Experience with AWS / cloud services
- Knowledge of Docker / CI-CD pipelines
- Exposure to microservices architecture
- Basic frontend knowledge (HTML, CSS, JavaScript)
- Experience working in an Agile/Scrum environment
Job Type: Full-time
Application Question(s):
- If selected, how soon can you join?
Experience:
- Total: 3 years (Required)
- Python: 3 years (Required)
Location:
- Mumbai, Maharashtra (Required)
Work Location: In person
We are seeking a developer who can create scalable solutions with a focus on Power BI-based technologies.
Roles and Responsibilities
- Develop Power BI reports & effective dashboards by gathering and translating end user requirements.
- Experience in optimizing Microsoft Power BI dashboards with a focus on usability, performance, flexibility, testability, and standardization
- Ability to turn large amounts of raw data into actionable information.
- Oversee the development process of the Product, POC and demo versions
- Maintain design standards on all dashboards
- Ability to setup security on all dashboards to ensure data compliance
- Perform optimization of database Queries
- Bring new technologies/ideas to the team
- Be part of our growing BI Practice
Requirements
- Hands on Experience in Microsoft BI, advanced analytics, and SQL Server Reporting Services.
- Hands-on professional with thorough knowledge of scripting, data source integration and advanced GUI development in Power BI.
- Excellent skills in writing DAX scripts.
- Creating Row level security with Power BI.
- Proficient in embedding Power BI into web-based applications
- Familiar with Power BI custom add-on development & deployment.
- Prior experience in connecting Power BI with on-premise and cloud computing platforms
- Good knowledge on Data warehousing
- Able to provide optimized SQL Scripts for complex scenarios
- Working knowledge of Business Intelligence Systems & Tools (Microsoft Power BI, SQL Server Reporting Services)
- Good understanding of the processes of data quality, data cleansing and data transformation.
- Candidate should possess very good communication with strong business analyst skills.
- Hands on experience in SSIS, SSRS, SSAS is a plus
Hiring for Data Engineer
Exp : 4 - 6 yrs
Edu : BE/B.Tech
Work Location : Noida WFO
Notice Period : Immediate
Skills : PySpark, SQL, AWS/GCP, Hadoop
Location: Hybrid (Bangalore)
Travel: Quarterly travel to Seattle (US)
Education: B.Tech from premium institutes only
Note: Only immediate joiners (0 to 15 days notice); no other applications accepted.
Role Summary
We are seeking top-tier Lead Engineers who can design, build, and deliver large-scale distributed systems with high performance, reliability, and operational excellence. The ideal candidate will be a hands-on engineer with expert system design ability, deep understanding of distributed architectures, and strong communication and leadership skills.
The Lead Engineer must be able to convert complex and ambiguous requirements into a fully engineered architecture and implementation plan covering components, data flows, infrastructure, observability, and operations.
Key Responsibilities
1. End-to-End System Architecture
- Architect scalable, reliable, and secure systems from initial concept through production rollout.
- Define system boundaries, components, service responsibilities, and integration points.
- Produce high-level (HLD) and low-level design (LLD) documents.
- Ensure designs meet performance, reliability, security, and cost objectives.
- Make informed design trade-offs with solid technical reasoning.
2. Component & Communication Design
- Break complex systems into independently deployable services.
- Define APIs, communication contracts, data models, and event schemas.
- Apply modern architecture patterns such as microservices, event-driven design, DDD, CQRS, and hexagonal architecture.
- Ensure component clarity, maintainability, and extensibility.
3. Communication Protocol & Middleware
- Design both sync and async communication layers: REST, RPC, gRPC, message queues, event streams (Kafka/Kinesis/Pulsar).
- Define retry/timeout strategies, circuit breakers, rate limiting, and versioning strategies.
- Handle backpressure, partitioning, delivery semantics (at-least/at-most/exactly once).
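To make the retry/timeout point above concrete, here is a minimal, hedged sketch of a retry-with-exponential-backoff helper in Python. The limits are illustrative, and a production system would typically pair this with a circuit breaker and a battle-tested backoff library rather than hand-rolled code.

```python
# Hypothetical sketch: retry an idempotent call with capped exponential backoff and jitter.
import time
import random

def call_with_retries(fn, max_attempts=5, base_delay=0.2, max_delay=5.0):
    """Retry fn() on exceptions, sleeping base_delay * 2^attempt (plus jitter) between tries."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # give up after the final attempt
            delay = min(max_delay, base_delay * (2 ** attempt))
            time.sleep(delay + random.uniform(0, delay / 2))  # jitter avoids a thundering herd

if __name__ == "__main__":
    def flaky():
        if random.random() < 0.7:
            raise TimeoutError("simulated downstream timeout")
        return "ok"

    print(call_with_retries(flaky))
```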
4. Data Architecture & Storage Strategy
- Architect data models and storage strategies for SQL and NoSQL databases, distributed caches, blob stores, and search indexes.
- Define sharding/partitioning, replication, consistency, indexing, backup/restore, and schema evolution strategies.
- Design real-time and batch data processing pipelines.
5. Operational Readiness
- Define observability (metrics, logs, traces) requirements.
- Collaborate with DevOps to ensure deployment, monitoring, alerts, and incident management readiness.
- Provide production support as a senior technical owner.
6. Leadership & Influence
- Lead technical discussions, design reviews, and cross-team collaboration.
- Mentor engineers and help elevate team practices.
- Influence technology direction and architectural standards.
Required Qualifications
- 10+ years of professional software engineering experience with strong backend and distributed systems background.
- Proven track record of leading large-scale architecture and delivery of production systems.
- Expert in system design with the ability to simplify ambiguity and craft robust solutions.
- Strong programming experience in one or more languages (Java, Go, Python, C++).
- Deep understanding of distributed systems, message streaming, queues, RPC/REST, and event-driven architecture.
- Experience with cloud platforms (AWS/Azure/GCP) and container technologies (Kubernetes/Docker).
- Strong communication, documentation, and leadership skills.
Preferred Skills
- Experience with large-scale messaging/streaming (Kafka/Pulsar), caching, and NoSQL.
- Experience designing for high availability, fault tolerance, and performance at scale.
- Mentoring and leading global engineering teams.
- Familiarity with observability tooling (Grafana, Prometheus, Jaeger).
We are seeking a Node.js Developer to build and maintain backend systems for University ERP, Examination Management, and LMS platforms, ensuring secure, scalable, and high-performance applications.
Key Responsibilities
Develop backend services using Node.js & Express.js
Build APIs for exam workflows, results, LMS modules, and ERP integrations
Manage databases (MongoDB / MySQL)
Implement role-based access, data security, and performance optimization
Integrate third-party services (payments, notifications, proctoring)
Collaborate with product, QA, and implementation teams
Required Skills
Node.js, Express.js, JavaScript
Database: MongoDB / SQL
API security (JWT, OAuth)
Experience in ERP / Exam / LMS systems preferred
About the Role
Hudson Data is looking for a Senior / Mid-Level SQL Engineer to design, build, optimize, and manage our data platforms. This role requires strong hands-on expertise in SQL, Google Cloud Platform (GCP), and Linux to support high-performance, scalable data solutions.
We are also hiring Python Programmers / Software Developers / Front-End and Back-End Engineers.
Key Responsibilities:
- Develop and optimize complex SQL queries, views, and stored procedures (see the brief sketch after this list)
- Build and maintain data pipelines and ETL workflows on GCP (e.g., BigQuery, Cloud SQL)
- Manage database performance, monitoring, and troubleshooting
- Work extensively in Linux environments for deployments and automation
- Partner with data, product, and engineering teams on data initiatives
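Only as a hedged sketch of the BigQuery-side work described above (the dataset and table names are hypothetical), a parameterized query via the official Python client might look like this.

```python
# Hypothetical sketch: run a parameterized aggregation in BigQuery and print the results.
from google.cloud import bigquery

client = bigquery.Client()  # assumes application-default credentials and a configured project

sql = """
    SELECT customer_id, SUM(amount) AS total_amount
    FROM `example_dataset.transactions`
    WHERE amount >= @min_amount
    GROUP BY customer_id
    ORDER BY total_amount DESC
    LIMIT 10
"""
job_config = bigquery.QueryJobConfig(
    query_parameters=[bigquery.ScalarQueryParameter("min_amount", "FLOAT64", 100.0)]
)
for row in client.query(sql, job_config=job_config).result():
    print(row.customer_id, row.total_amount)
```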
Required Skills & Qualifications
Must-Have Skills (Essential)
- Expert-level GCP experience (mandatory)
- Strong Linux / shell scripting skills (mandatory)
Nice to Have
- Experience with data warehousing and ETL frameworks
- Python / scripting for automation
- Performance tuning and query optimization experience
Soft Skills
- Strong analytical, problem-solving, and critical-thinking abilities.
- Excellent communication and presentation skills, including data storytelling.
- Curiosity and creativity in exploring and interpreting data.
- Collaborative mindset, capable of working in cross-functional and fast-paced environments.
Education & Certifications
- Bachelor's degree in Data Science, Computer Science, Statistics, Mathematics, or a related field.
- Master's degree in Data Analytics, Machine Learning, or Business Intelligence preferred.
Why Join Hudson Data
At Hudson Data, you'll be part of a dynamic, innovative, and globally connected team that uses cutting-edge tools, from AI and ML frameworks to cloud-based analytics platforms, to solve meaningful problems. You'll have the opportunity to grow, experiment, and make a tangible impact in a culture that values creativity, precision, and collaboration.
Role Summary:
We are seeking experienced Application Support Engineers to join our client-facing support team. The ideal candidate will be the first point of contact for client issues, ensuring timely resolution, clear communication, and high customer satisfaction in a fast-paced trading environment.
Key Responsibilities:
• Act as the primary contact for clients reporting issues related to trading applications and platforms.
• Log, track, and monitor issues using internal tools and ensure resolution within defined TAT (Turnaround Time).
• Liaise with development, QA, infrastructure, and other internal teams to drive issue resolution.
• Provide clear and timely updates to clients and stakeholders regarding issue status and resolution.
• Maintain comprehensive logs of incidents, escalations, and fixes for future reference and audits.
• Offer appropriate and effective resolutions for client queries on functionality, performance, and usage.
• Communicate proactively with clients about upcoming product features, enhancements, or changes.
• Build and maintain strong relationships with clients through regular, value-added interactions.
• Collaborate in conducting UAT, release validations, and production deployment verifications.
• Assist in root cause analysis and post-incident reviews to prevent recurrences.
Required Skills & Qualifications:
• Bachelor's degree in computer science, IT, or related field.
• 2+ years in Application/Technical Support, preferably in the broking/trading domain.
• Sound understanding of capital markets – Equity, F&O, Currency, Commodities.
• Strong technical troubleshooting skills – Linux/Unix, SQL, log analysis.
• Familiarity with trading systems, RMS, OMS, APIs (REST/FIX), and order lifecycle.
• Excellent communication and interpersonal skills for effective client interaction.
• Ability to work under pressure during trading hours and manage multiple priorities.
• Customer-centric mindset with a focus on relationship building and problem-solving.
Nice to Have:
• Exposure to broking platforms like NOW, NEST, ODIN, or custom-built trading tools.
• Experience interacting with exchanges (NSE, BSE, MCX) or clearing corporations.
• Knowledge of scripting (Shell/Python) and basic networking is a plus.
• Familiarity with cloud environments (AWS/Azure) and monitoring tools.
Why Join Us?
• Be part of a team supporting mission-critical systems in real-time.
• Work in a high-energy, tech-driven environment.
• Opportunities to grow into domain/tech leadership roles.
• Competitive salary and benefits, health coverage, and employee wellness programs.
JOB DETAILS:
- Job Title: Senior Business Analyst
- Industry: Ride-hailing
- Experience: 4-7 years
- Working Days: 5 days/week
- Work Mode: ONSITE
- Job Location: Bangalore
- CTC Range: Best in Industry
Required Skills: Data Visualization, Data Analysis, Strong in Python and SQL, Cross-Functional Communication & Stakeholder Management
Criteria:
1. Candidate must have 4–7 years of experience in analytics / business analytics roles.
2. Candidate must be currently based in Bangalore only (no relocation allowed).
3. Candidate must have hands-on experience with Python and SQL.
4. Candidate must have experience working with databases/APIs (Mongo, Presto, REST or similar).
5. Candidate must have experience building dashboards/visualizations (Tableau, Metabase or similar).
6. Candidate must be available for face-to-face interviews in Bangalore.
7. Candidate must have experience working closely with business, product, and operations teams.
Description
Job Responsibilities:
● Acquiring data from primary/secondary data sources such as Mongo, Presto, and REST APIs (see the brief sketch after this list).
● Candidate must have strong hands-on experience in Python and SQL.
● Build visualizations to communicate data to key decision-makers and preferably familiar with building interactive dashboards in Tableau/Metabase
● Establish relationship between output metric and its drivers in order to identify critical drivers and control the critical drivers so as to achieve the desired value of output metric
● Partner with operations/business teams to consult, develop and implement KPIs, automated reporting/process solutions, and process improvements to meet business needs
● Collaborating with our business owners + product folks and perform data analysis of experiments and recommend the next best action for the business. Involves being embedded into business decision teams for driving faster decision making
● Collaborating with several functional teams within the organization and use raw data and metrics to back up assumptions, develop hypothesis/business cases and complete root cause analyses; thereby delivering output to business users
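As a small, hedged illustration of the data-acquisition step above (the endpoint and field names are hypothetical, not a real internal API), pulling a REST API into pandas for analysis might look like this.

```python
# Hypothetical sketch: fetch JSON from a REST endpoint and aggregate it with pandas.
# The URL and field names are placeholders.
import requests
import pandas as pd

resp = requests.get("https://api.example.com/v1/rides", params={"city": "bangalore"}, timeout=30)
resp.raise_for_status()

df = pd.json_normalize(resp.json()["results"])
daily = (
    df.assign(date=pd.to_datetime(df["created_at"]).dt.date)
      .groupby("date", as_index=False)
      .agg(rides=("ride_id", "count"), revenue=("fare", "sum"))
)
print(daily.head())
```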
Job Requirements:
● Undergraduate and/or graduate degree in Math, Economics, Statistics, Engineering, Computer Science, or other quantitative field.
● Around 4-6 years of experience embedded in analytics and adjacent business teams, working as an analyst aiding decision making
● Proficiency in Excel and ability to structure and present data in creative ways to drive insights
● Some basic understanding of (or experience in) evaluating financial parameters like return-on-investment (ROI), cost allocation, optimization, etc. is good to have
What’s there for you?
● Opportunity to understand the overall business & collaborate across all functional departments
● Prospect to disrupt the existing mobility industry business models (ideate, pilot, monitor & scale)
● Deal with the ambiguity of decision making while balancing long-term/strategic business needs and short-term/tactical moves
● Full business ownership working style which translates to freedom to pick problem statements/workflow and self-driven culture
The Power BI Intern will assist the analytics team in using Microsoft Power BI to create interactive dashboards and reports. Working with actual datasets to assist well-informed business decision-making, this position provides practical exposure to data analysis, visualization, and business intelligence techniques.

Profitable E-comm/NBFC company close to becoming a Unicorn.
Want to build core backend-powered experiences such as Checkout and Credit for the 5th largest E-com portal in the country? Then read on..
About The company
Founded by serial entrepreneurs from IIT Bombay, Snapmint is challenging the way banking is done by building the banking experience from the ground up. Our first product provides purchase financing at 0% interest rates to 300 million consumers in India who do not have credit cards, using instant credit scoring and advanced underwriting systems. We look at hundreds of variables, going well beyond traditional credit models. With real-time credit approval and seamless digital loan servicing and repayment technology, we are revolutionizing the way banking is done for today's smartphone-wielding Indian.
Website: https://snapmint.com
LinkedIn: https://www.linkedin.com/company/snapmintfinserv/
Title: Senior Engineering Manager, Backend
Experience: 8-12 Years
Work Location: Gurgaon (Unitech Cyber Park, Sector 39)
Working Arrangement: 5 days (WFO)
Job Overview:
As Engineering Manager, Backend, you will lead a team of backend engineers, driving the development of scalable, reliable, and performant systems. You will work closely with product management, front-end engineers, and other cross-functional teams to deliver high-quality solutions while ensuring alignment with the company's technical and business goals. You will play a key role in coaching and mentoring engineers, promoting best practices, and helping to grow the backend engineering capabilities.
Key Responsibilities:
- Design and build highly scalable, low-latency, fault-tolerant backend services handling high-volume financial transactions.
- Work hands-on with the team on core backend development, architecture, and production issues.
- Lead, mentor, and manage a team of backend engineers, ensuring high-quality delivery and fostering a collaborative work environment.
- Collaborate with product managers, engineers, and other stakeholders to define technical solutions and design scalable backend architectures.
- Own the development and maintenance of backend systems, APIs, and services.
- Drive technical initiatives, including infrastructure improvements, performance optimizations, and platform scalability.
- Guide the team in implementing industry best practices for code quality, security, and performance.
- Participate in code reviews, providing constructive feedback and maintaining high coding standards.
- Promote agile methodologies and ensure the team adheres to sprint timelines and goals.
- Develop and track key performance indicators (KPIs) to measure team productivity and system reliability.
- Foster a culture of continuous learning, experimentation, and improvement within the backend engineering team.
Requirements:
- Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience).
- 8+ years of experience in backend development with a proven track record of leading engineering teams.
- Strong experience with a backend language, i.e., Node.js/Golang
- Experience working with databases (SQL, NoSQL), caching systems, and RESTful APIs.
- Familiarity with cloud platforms like AWS, GCP, or Azure and containerization technologies (e.g., Docker, Kubernetes).
- Solid understanding of software development principles, version control, and CI/CD practices.
- Excellent problem-solving skills and the ability to architect complex systems.
- Strong leadership, communication, and interpersonal skills.
- Ability to thrive in a fast-paced, dynamic environment and manage multiple priorities effectively.
Job Description
DesignByte Studio is looking for a passionate MERN Stack Developer who enjoys building real world products and learning through hands on work. This role is ideal for someone at the start of their career who wants to grow by working on live projects with a small and focused team.
You will work closely with designers and product members to build modern web applications and improve existing features. The role is fully remote and outcome focused.
Responsibilities
Build and maintain web applications using MongoDB, Express, React and Node.js
Develop clean, reusable and scalable frontend components
Integrate APIs and work with backend logic and databases
Fix bugs, improve performance and maintain code quality
Collaborate with the team on feature planning and implementation
Required Skills
Good understanding of JavaScript fundamentals
Basic to intermediate knowledge of React and Node.js
Familiarity with MongoDB and REST APIs
Understanding of HTML, CSS and modern frontend practices
Basic knowledge of Git and version control
Good to Have
Experience with Next.js or Tailwind CSS
Basic understanding of authentication and database design
Any personal or academic projects using the MERN stack
What We Offer
Remote work environment
Opportunity to work on real products and client projects
Learning focused culture with mentorship
Career growth based on performance and skills
Experience required is 0 to 2 years. Salary range is ₹2L to ₹5L per year.
Roles & Responsibilities
- Data Engineering Excellence: Design and implement data pipelines using formats like JSON, Parquet, CSV, and ORC, utilizing batch and streaming ingestion.
- Cloud Data Migration Leadership: Lead cloud migration projects, developing scalable Spark pipelines.
- Medallion Architecture: Implement Bronze, Silver, and Gold tables for scalable data systems (sketched briefly after this list).
- Spark Code Optimization: Optimize Spark code to ensure efficient cloud migration.
- Data Modeling: Develop and maintain data models with strong governance practices.
- Data Cataloging & Quality: Implement cataloging strategies with Unity Catalog to maintain high-quality data.
- Delta Live Table Leadership: Lead the design and implementation of Delta Live Tables (DLT) pipelines for secure, tamper-resistant data management.
- Customer Collaboration: Collaborate with clients to optimize cloud migrations and ensure best practices in design and governance.
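For illustration only, below is a minimal PySpark sketch of a Bronze-to-Silver step in a Medallion layout; the table names, columns, and source path are made-up assumptions, and running it presumes a Spark environment with Delta Lake configured.

```python
# Minimal PySpark sketch of a Bronze -> Silver Medallion step.
# Table names, columns, and the source path are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion-sketch").getOrCreate()

# Bronze: land the raw JSON as-is, adding only ingestion metadata.
bronze = (
    spark.read.json("s3://example-bucket/orders_raw/")  # hypothetical source
    .withColumn("_ingested_at", F.current_timestamp())
)
bronze.write.format("delta").mode("append").saveAsTable("orders_bronze")

# Silver: cleanse and deduplicate the Bronze data for downstream use.
silver = (
    spark.read.table("orders_bronze")
    .filter(F.col("order_id").isNotNull())
    .dropDuplicates(["order_id"])
    .withColumn("order_date", F.to_date("order_ts"))
)
silver.write.format("delta").mode("overwrite").saveAsTable("orders_silver")
```

A Gold layer would typically aggregate the Silver table into business-level metrics; on Databricks, the same flow can also be expressed declaratively with Delta Live Tables.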
Qualifications
- Experience: Minimum 5 years of hands-on experience in data engineering, with a proven track record in complex pipeline development and cloud-based data migration projects.
- Education: Bachelor’s or higher degree in Computer Science, Data Engineering, or a related field.
Must-have Skills
- Proficiency in Spark, SQL, Python, and other relevant data processing technologies.
- Strong knowledge of Databricks and its components, including Delta Live Table (DLT) pipeline implementations.
- Expertise in on-premises-to-cloud Spark code optimization and Medallion Architecture.
Good to Have
- Familiarity with AWS services (experience with additional cloud platforms like GCP or Azure is a plus).
Soft Skills
- Excellent communication and collaboration skills, with the ability to work effectively with clients and internal teams.
Certifications
- AWS/GCP/Azure Data Engineer Certification.
Location - Vashi, Navi Mumbai (On-Site)
Experience - 0-1 years
Stipend - ₹8,000 to ₹12,000
Graduates and immediate joiners preferred
About Arcitech AI :-
Arcitech AI is a technology-driven company specializing in software development, AI-powered solutions, mobile app innovation, and digital transformation. Based in Mumbai’s dynamic Lower Parel, we foster a culture focused on innovation, collaboration, and continuous learning, empowering teams to build impactful digital products.
Role Overview :-
As a WordPress Developer Intern at Arcitech AI, you will gain hands-on experience in building and maintaining WordPress websites under the guidance of experienced developers. This role is ideal for aspiring web developers who are eager to learn real-world development practices, improve their technical skills, and work on live projects in a supportive, fast-paced environment.
Job Responsibilities :-
a) Assist in developing and maintaining WordPress websites
b) Customize existing WordPress themes and plugins as per design requirements
c) Implement UI designs using HTML, CSS, and basic JavaScript
d) Work with page builders such as Elementor and Gutenberg
e) Fix basic bugs, UI issues, and layout inconsistencies
f) Ensure responsive design across multiple devices and screen sizes
g) Support senior developers in ongoing WordPress and web development projects
Job Requirements :-
a) Basic knowledge of WordPress CMS and its dashboard
b) Understanding of HTML and CSS
c) Basic knowledge of JavaScript
d) Familiarity with PHP at a beginner level
e) Understanding of responsive web design principles
f) Willingness to learn, adapt quickly, and take feedback positively
g) Personal projects, demo websites, or portfolio links are preferred
h) Basic knowledge of MySQL is a plus
i) Exposure to hosting environments and tools such as cPanel, Hostinger, and FTP clients
We are seeking a highly skilled software developer with proven experience in developing and scaling education ERP solutions. The ideal candidate should have strong expertise in Node.js or PHP (Laravel), MySQL, and MongoDB, along with hands-on experience in implementing ERP modules such as HR, Exams, Inventory, Learning Management System (LMS), Admissions, Fee Management, and Finance.
Key Responsibilities
Design, develop, and maintain scalable Education ERP modules.
Work on end-to-end ERP features, including HR, exams, inventory, LMS, admissions, fees, and finance.
Build and optimize REST APIs/GraphQL services and ensure seamless integrations.
Optimize system performance, scalability, and security for high-volume ERP usage.
Conduct code reviews, enforce coding standards, and mentor junior developers.
Stay updated with emerging technologies and recommend improvements for ERP solutions.
Required Skills & Qualifications
Strong expertise in Node.js and PHP (Laravel, Core PHP).
Proficiency with MySQL, MongoDB, and PostgreSQL (database design & optimization).
Frontend knowledge: JavaScript, jQuery, HTML, CSS (React/Vue preferred).
Experience with REST APIs, GraphQL, and third-party integrations (payment gateways, SMS, and email).
Hands-on with Git/GitHub, Docker, and CI/CD pipelines.
Familiarity with cloud platforms (AWS, Azure, GCP) is a plus.
4+ years of professional development experience, with a minimum of 2 years in ERP systems.
Preferred Experience
Prior work in the education ERP domain.
Deep knowledge of HR, Exam, Inventory, LMS, Admissions, Fees & Finance modules.
Exposure to high-traffic enterprise applications.
Strong leadership, mentoring, and problem-solving abilities.
Benefits:
Permanent Work From Home
Company Description
eShipz is a rapidly expanding logistics automation platform designed to optimize shipping operations and enhance post-purchase customer experiences. The platform offers solutions such as multi-carrier integrations, real-time tracking, NDR management, returns, freight audits, and more. Trusted by over 350 businesses, eShipz provides easy-to-use analytics, automated shipping processes, and reliable customer support. As a trusted partner for eCommerce businesses and enterprises, eShipz delivers smarter, more efficient shipping solutions. Visit www.eshipz.com for more information.
Role Description
The Python Support Engineer role at eShipz involves supporting clients by providing technical solutions and resolving platform-related issues. Responsibilities include troubleshooting reported problems, delivering professional technical support, and assisting with software functionality and operating systems. The engineer will also collaborate with internal teams to ensure a seamless customer experience. This is a full-time, on-site role located in Sanjay Nagar, Greater Bengaluru Area.
Qualifications
- Strong troubleshooting and technical support skills, with the ability to identify and address software or technical challenges effectively.
- Ability to provide professional customer support and service, ensuring high customer satisfaction and prompt resolution of inquiries.
- Working knowledge of operating systems to diagnose and resolve platform-specific issues efficiently.
- Excellent problem-solving, communication, and interpersonal skills.
- Bachelor's degree in computer science, IT, or a related field.
- Experience working with Python and an understanding of backend systems is a plus.
- Technical Skills:
- Python Proficiency: Strong understanding of core Python (data structures, decorators, generators, and exception handling); a short sketch follows this list.
- Frameworks: Familiarity with web frameworks like Django, Flask, or FastAPI.
- Databases: Proficiency in SQL (PostgreSQL/MySQL) and understanding of ORMs like SQLAlchemy or Django ORM.
- Infrastructure: Basic knowledge of Linux/Unix commands, Docker, and CI/CD pipelines (Jenkins/GitHub Actions).
- Version Control: Comfortable using Git for branching, merging, and pull requests.
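As a rough illustration of the core-Python and FastAPI skills listed above, here is a minimal sketch combining a retry decorator, exception handling, and a small API endpoint; the /tracking route, the carrier lookup, and all names are hypothetical and not part of the eShipz platform.

```python
# Minimal sketch: a retry decorator, exception handling, and a FastAPI endpoint.
# The /tracking route and fetch_tracking_status helper are hypothetical examples.
import functools
import time

from fastapi import FastAPI, HTTPException

app = FastAPI()

def retry(times: int = 3, delay: float = 0.5):
    """Retry a flaky call a few times before giving up."""
    def wrapper(func):
        @functools.wraps(func)
        def inner(*args, **kwargs):
            for attempt in range(1, times + 1):
                try:
                    return func(*args, **kwargs)
                except ConnectionError:
                    if attempt == times:
                        raise
                    time.sleep(delay)
        return inner
    return wrapper

@retry(times=3)
def fetch_tracking_status(awb: str) -> dict:
    # Placeholder for a real carrier API call that may raise ConnectionError.
    return {"awb": awb, "status": "in_transit"}

@app.get("/tracking/{awb}")
def tracking(awb: str):
    try:
        return fetch_tracking_status(awb)
    except ConnectionError:
        raise HTTPException(status_code=502, detail="Carrier API unreachable")
```

Such an app could be served locally with `uvicorn module_name:app --reload`, assuming the file is saved as module_name.py.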
- Soft Skills:
- Analytical Thinking: A logical approach to solving complex, "needle-in-a-haystack" problems.
- Communication: Ability to explain technical concepts to both developers and end-users.
- Patience & Empathy: Managing high-pressure situations when critical systems are down.
- Work Location: Sanjay Nagar, Bangalore (WFO)
- Work Timing:
- Mon - Fri (WFO) (9:45 A.M. - 6:15 P.M.)
- 1st & 3rd Sat (WFO) (9:00 A.M. - 2:00 P.M.)
- 2nd & 4th Sat (WFH) (9:00 A.M. - 2:00 P.M.)