
50+ SQL Jobs in India

Apply to 50+ SQL Jobs on CutShort.io. Find your next job, effortlessly. Browse SQL Jobs and apply today!

Hypersonix Inc

Posted by Reshika Mendiratta
Remote only
4yrs+
Upto ₹30L / yr (Varies)
SQL
Python
Data engineering
Big Data
Amazon Web Services (AWS)
+1 more

About the Company

Hypersonix.ai is disrupting the e-commerce space with AI, ML, and advanced decision capabilities to drive real-time business insights. Hypersonix.ai has been built from the ground up with new-age technology to simplify the consumption of data for our customers in various industry verticals. Hypersonix.ai is seeking a well-rounded, hands-on product leader to help lead product management of key capabilities and features.


About the Role

We are looking for talented and driven Data Engineers at various levels to work with customers to build data warehouses, analytical dashboards, and ML capabilities as per customer needs.


Roles and Responsibilities

  • Create and maintain optimal data pipeline architecture
  • Assemble large, complex data sets that meet functional / non-functional business requirements; should write complex queries in an optimized way
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies
  • Run ad-hoc analysis utilizing the data pipeline to provide actionable insights
  • Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs
  • Keep our data separated and secure across national boundaries through multiple data centers and AWS regions
  • Work with analytics and data scientist team members and assist them in building and optimizing our product into an innovative industry leader


Requirements

  • Advanced working knowledge of SQL, experience with relational databases and query authoring, and working familiarity with a variety of databases (a small query-optimization sketch follows this list)
  • Experience building and optimizing ‘big data’ data pipelines, architectures and data sets
  • Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement
  • Strong analytic skills related to working with unstructured datasets
  • Build processes supporting data transformation, data structures, metadata, dependency and workload management
  • A successful history of manipulating, processing and extracting value from large disconnected datasets
  • Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores
  • Experience supporting and working with cross-functional teams in a dynamic environment
  • We are looking for a candidate with 4+ years of experience in a Data Engineer role who holds a graduate degree in Computer Science or Information Technology, or has completed an MCA.
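As a rough illustration of the kind of optimized query authoring this role calls for, here is a minimal sketch that replaces a per-row correlated subquery with a single window-function pass. The table and column names are hypothetical, and SQLite merely stands in for the warehouse engine:

    # Sketch: latest order per customer via a window function (one scan)
    # instead of a correlated subquery (one rescan per row).
    # Table/column names are hypothetical; SQLite stands in for the warehouse.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE orders (order_id INTEGER, customer_id INTEGER,
                             amount REAL, ordered_at TEXT);
        INSERT INTO orders VALUES
            (1, 10, 120.0, '2024-01-05'),
            (2, 10, 80.0,  '2024-02-11'),
            (3, 11, 45.0,  '2024-01-20');
    """)

    rows = conn.execute("""
        SELECT order_id, customer_id, amount
        FROM (
            SELECT *,
                   ROW_NUMBER() OVER (PARTITION BY customer_id
                                      ORDER BY ordered_at DESC) AS rn
            FROM orders
        ) AS t
        WHERE rn = 1
    """).fetchall()
    print(rows)  # e.g. [(2, 10, 80.0), (3, 11, 45.0)]
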
Furrl
Posted by Sricharan KS
Bengaluru (Bangalore)
0 - 2 yrs
₹25000 - ₹35000 / mo
SQL

About Furrl

Furrl is a high-scale discovery experience for new-age D2C brands. Furrl is breaking through the clutter of over 100,000 such brands with a novel #Vibe-based discovery experience, attacking a USD 100 billion market. This asset-light platform is a global first of its kind, and rapid growth and customer love have already demonstrated early product-market fit.

We’re looking for a Product Intern who enjoys working with data and is curious about how digital products are built.

Your main job will be to help the Product team understand user behaviour, pull data, and share insights that improve the app.

This is perfect for students who enjoy problem-solving, numbers, and learning how product teams work in real life.


Location: HSR Layout, Bangalore 


Responsibilities

  • Run SQL queries to pull data (a minimal example follows this list)
  • Work with the Product team to analyse trends and numbers
  • Create simple reports and dashboards
  • Help the team with product experiments (like A/B tests)
  • Support in documentation and basic product research
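By way of a minimal, hedged example of the kind of data pull above (table and column names are hypothetical; SQLite stands in for the product database):

    # Sketch: a simple daily-active-users pull.
    # Table/column names are hypothetical.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE app_events (user_id INTEGER, event_name TEXT, event_date TEXT);
        INSERT INTO app_events VALUES
            (1, 'open', '2024-03-01'),
            (2, 'open', '2024-03-01'),
            (1, 'open', '2024-03-02');
    """)

    for day, dau in conn.execute("""
        SELECT event_date, COUNT(DISTINCT user_id) AS dau
        FROM app_events
        GROUP BY event_date
        ORDER BY event_date
    """):
        print(day, dau)  # 2024-03-01 2, then 2024-03-02 1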

Requirements 

  • Basic SQL knowledge (very important)
  • Comfortable working with numbers
  • Curious, responsible, and eager to learn
  • Good communication skills
  • Familiarity with analytics tools (GA, Mixpanel, etc.) is not mandatory, but good if you already know them
  • Previous project or mini-internship experience 

What to get excited about 

  • Be a part of a strong early team building a massive business.
  • Work directly with the CEO and other leadership team members, who are well-respected industry experts.
  • Get the rare chance to see a 0 to 1 journey and be a key member of that journey.
  • Accelerate your career with a rapid growth path within the organisation.
  • Strong possibility of PPO (Pre-Placement Offer) based on performance.



Service Co

Agency job
via Vikash Technologies by Rishika Teja
Mumbai
6 - 8 yrs
₹15L - ₹18L / yr
Test Automation (QA)
Manual testing
SQL

Hiring for Lead QA


Exp: 6 - 8 yrs

Work Location : Vikhroli Mumbai

WFO


6+ years of proven experience as a Quality Assurance Engineer.


Experience with a variety of testing techniques such as UI testing, automated testing, and test-driven development strategies; experience with coding in Java, HTML5, DB2, and XML


Experience with web security technology & software QA tools and processes


Experience in writing clear, concise and comprehensive test plans and test cases


Hands-on experience with both white box and black box testing


Hands-on experience with automated testing tools; solid knowledge of SQL and scripting

OIP Insurtech

Posted by Katarina Vasic
Remote, Hyderabad
3 - 8 yrs
Best in industry
SQL
PowerBI
Qlikview
Qlik
Policy administration

We’re looking for a dynamic and driven Data Analyst to join our team of technology enthusiasts. This role is crucial in transforming data into insights that support strategic decision-making and innovation within the insurance technology (InsurTech) space. If you’re passionate about working with data, understanding systems, and delivering value through analytics, we’d love to hear from you.


What We’re Looking For

  • Proven experience working as a Data Analyst or in a similar analytical role
  • 3+ years of experience in the field
  • Strong command of SQL for querying and manipulating relational databases
  • Experience with Power BI for building impactful dashboards and reports
  • Familiarity with QlikView and Qlik Sense is a plus
  • Ability to communicate findings clearly to technical and non-technical stakeholders
  • Knowledge of Python or R for data manipulation is nice to have
  • Bachelor’s degree in Computer Science, Statistics, Mathematics, Economics, or a related field
  • Understanding of the insurance industry or InsurTech is a strong advantage


What You’ll Be Doing

  • Delivering timely and insightful reports to support strategic decision-making
  • Working extensively with Policy Administration System (PAS) data to uncover patterns and trends
  • Ensuring data accuracy and consistency across reports and systems
  • Collaborating with clients, underwriters, and brokers to translate business needs into data solutions
  • Organizing and structuring datasets, contributing to data engineering workflows and pipelines
  • Producing analytics to support business development and market strategy

RADCOM
Posted by Shreya Tiwari
Delhi
3 - 5 yrs
₹5L - ₹12L / yr
Linux/Unix
SQL
Telecom
5G
System deployment
+2 more

Dear Candidate


Looking for a Telecom Advanced Support Engineer (PSO)


If this opportunity interests you, kindly revert.


Job Description:


3+ years’ experience working as a Support Engineer / Network Engineer / Deployment Engineer / Solutions Engineer / Integration Engineer in the telecom deployment industry.

■ BE/B.Sc. in CS, EE, or Telecommunications, graduated with honors from an elite university (70%-90%, depending on the college; Delhi University a plus).

Telecom Knowledge (IMS, 4G) - Mandatory.

Linux OS, SQL knowledge – Mandatory. Vertica DB, scripting - a plus.

■ OpenStack / cloud-based infrastructure (OpenStack/Kubernetes) knowledge is a plus

■ Be available to work off business hours to address critical matters/situations based on Radcom’s on-call support model

■ Willing to work evening and night shifts and weekends.

■ Fluent English – Mandatory

■ Valid passport 


■ Based in RADCOM India offices, Delhi, India. 

■ Responsible for all the technical support aspects required by the client for RADCOM’s solutions, including integration, deployment, and customization of applications; and KPI reports for individual customer needs.

■ Support and deployment of Radcom’s solution at cloud-based customer environments (case to case basis)

■ Very good customer-facing skills; able to drive customer calls independently.

■ Able to drive and deliver small internal customer projects end to end, including handling all required internal communications.

■ Working closely with the management on customer updates and future plan

■ Daily maintenance and problem resolution, System patches and software upgrades, and routine system configuration

■ Identifies, diagnoses, and resolves system issues with good troubleshooting and root-cause analysis.

■ If required: travel to on-site support outside India, training, installation, and configuration, etc


Thanks & Regards

Shreya Tiwari

Technical Recruiter - HR

RADCOM


GrowthArc

Posted by Reshika Mendiratta
Remote, Bengaluru (Bangalore)
4yrs+
Upto ₹35L / yr (Varies)
Go Programming (Golang)
RESTful APIs
Amazon Web Services (AWS)
Windows Azure
Google Cloud Platform (GCP)
+4 more

Job Summary:

We are seeking an experienced Golang Developer with 4+ years of hands-on experience to design, develop, and maintain scalable RESTful APIs and microservices. The ideal candidate should be proficient in cloud platforms and have strong problem-solving skills to work in dynamic environments.


Key Responsibilities:

  • Develop and maintain high-quality RESTful APIs using Golang.
  • Design and implement microservices architecture for scalable applications.
  • Collaborate with cross-functional teams to define and deliver features.
  • Deploy, manage, and troubleshoot applications on cloud platforms (AWS, Azure, GCP, etc.).
  • Write efficient, reusable, and testable code following best practices.
  • Participate in code reviews, debugging, and performance tuning.
  • Ensure security and data protection in application development.

Qualifications:

  • 4+ years of professional experience in Golang development.
  • Strong knowledge of RESTful API design and implementation.
  • Hands-on experience with microservices architecture.
  • Familiarity with one or more cloud platforms (AWS, Azure, GCP).
  • Experience with containerization technologies like Docker and Kubernetes is a plus.
  • Good understanding of CI/CD pipelines and DevOps practices.
  • Excellent problem-solving and communication skills.
Bengaluru (Bangalore)
4 - 6 yrs
₹8L - ₹14L / yr
React.js
NodeJS (Node.js)
HTML/CSS
Javascript
SQL
+2 more

Project Overview :


Join our AI Bot development team focused on building a scenario-based training platform for contact center agents. The system delivers mock call simulations, AI-driven feedback, and supervisor dashboards. You'll work with Node.js for backend orchestration and React.js for frontend redaction and annotation.


Key Responsibilities :


- Develop backend services using Node.js, including API orchestration and integration with AI/ML services.


- Implement frontend redaction features using Redact.js, integrated into React.js dashboards.


- Collaborate with AI/ML engineers to embed intelligent feedback and behavioral analysis.


- Build secure, multi-tenant systems with role-based access control (RBAC).


- Optimize performance for real-time audio analysis and transcript synchronization.


- Participate in agile grooming sessions and contribute to architectural decisions.


Required Skills :


- Experience with React.js or similar annotation/redaction libraries.


- Strong understanding of RESTful APIs, React.js, and Material-UI.


- Familiarity with Azure services, SQL, and authentication protocols (SSO, JWT).


- Experience with secure session management and data protection standards.


Preferred Qualifications :


- Exposure to AI/ML workflows and Python-based services.


- Experience with Livekit or similar real-time communication platforms.


- Familiarity with Power BI and accessibility standards (WCAG).


Soft Skills :


- Problem-solving mindset and adaptability.


- Ability to work independently and meet tight deadlines.

Appiness Interactive
Remote only
6 - 10 yrs
₹10L - ₹14L / yr
Python
Django
FastAPI
Flask
pandas
+9 more

Position Overview: The Lead Software Architect - Python & Data Engineering is a senior technical leadership role responsible for designing and owning end-to-end architecture for data-intensive, AI/ML, and analytics platforms, while mentoring developers and ensuring technical excellence across the organization. 


Key Responsibilities: 

  • Design end-to-end software architecture for data-intensive applications, AI/ML pipelines, and analytics platforms
  • Evaluate trade-offs between competing technical approaches 
  • Define data models, API approach, and integration patterns across systems 
  • Create technical specifications and architecture documentation 
  • Lead by example through production-grade Python code and mentor developers on engineering fundamentals 
  • Conduct design and code reviews focused on architectural soundness 
  • Establish engineering standards, coding practices, and design patterns for the team 
  • Translate business requirements into technical architecture 
  • Collaborate with data scientists, analysts, and other teams to design integrated solutions 
  • Whiteboard and defend system design and architectural choices 
  • Take responsibility for system performance, reliability, and maintainability 
  • Identify and resolve architectural bottlenecks proactively 


Required Skills:  

  • 8+ years of experience in software architecture and development  
  • Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field 
  • Strong foundations in data structures, algorithms, and computational complexity 
  • Experience in system design for scale, including caching strategies, load balancing, and asynchronous processing  
  • 6+ years of Python development experience 
  • Deep knowledge of Django, Flask, or FastAPI (a minimal FastAPI sketch follows this list)
  • Expert understanding of Python internals including GIL and memory management 
  • Experience with RESTful API design and event-driven architectures (Kafka, RabbitMQ) 
  • Proficiency in data processing frameworks such as Pandas, Apache Spark, and Airflow 
  • Strong SQL optimization and database design experience (PostgreSQL, MySQL, MongoDB)
  • Experience with AWS, GCP, or Azure cloud platforms
  • Knowledge of containerization (Docker) and orchestration (Kubernetes) 
  • Hands-on experience designing CI/CD pipelines
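As a minimal sketch of the API side of this stack (illustrative only; the endpoint and model names are hypothetical, not a prescribed design):

    # Minimal FastAPI sketch; endpoint and model names are hypothetical.
    from fastapi import FastAPI
    from pydantic import BaseModel

    app = FastAPI()

    class Metric(BaseModel):
        name: str
        value: float

    @app.get("/health")
    def health() -> dict:
        # Lightweight liveness probe, e.g. for a load balancer.
        return {"status": "ok"}

    @app.post("/metrics")
    def ingest_metric(metric: Metric) -> dict:
        # In a real system this might enqueue to Kafka/RabbitMQ or persist to a store.
        return {"received": metric.name}

Run locally with, e.g., uvicorn main:app --reload (assuming the file is named main.py).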


Preferred (Bonus) Skills

  • Experience deploying ML models to production (MLOps, model serving, monitoring)
  • Understanding of ML system design including feature stores and model versioning
  • Familiarity with ML frameworks such as scikit-learn, TensorFlow, and PyTorch  
  • Open-source contributions or technical blogging demonstrating architectural depth 
  • Experience with modern front-end frameworks for full-stack perspective


CT Nova
Posted by Sanchit Gupta
Remote only
3 - 15 yrs
₹25L - ₹50L / yr
.NET
Windows Azure
SQL
React.js
Microservices

Experience: 3+ years (Backend/Full-Stack)


Note: You will be the 3rd engineer on the team. If you are comfortable with Java and Spring Boot plus cloud, then you will easily be able to pick up the following stack.


Key Requirements —

  • Primary Stack: Experience with .NET
  • Cloud: Solid understanding of cloud platforms (preferably Azure)
  • Frontend/DevOps: Familiarity with React and DevOps practices
  • Architecture: Strong grasp of microservices
  • Technical Skills: Basic proficiency in scripting, databases, and Git


Compensation: competitive salary, based on experience and fit

PalTech

Posted by Nikita Sinha
Hyderabad
4 - 8 yrs
Upto ₹32L / yr (Varies)
SQL
Python
Software Testing (QA)
Data validation
PySpark

As a Data Quality Engineer at PalTech, you will be responsible for designing and executing comprehensive test strategies for end-to-end data validation. Your role will ensure data completeness, accuracy, and integrity across ETL processes, data warehouses, and reporting environments. You will automate data validation using Python, validate fact and dimension tables, large datasets, file ingestions, and data exports, while ensuring adherence to data security standards, including encryption and authorization. This role requires strong analytical abilities, proficiency in SQL and Python, and the capability to collaborate effectively with cross-functional teams to drive continuous improvements through automation and best practices.
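As a hedged sketch of the automated-validation idea described above (the table, columns, and connections are hypothetical; SQLite stands in for both the source and the target):

    # Sketch: reconcile row counts and a column aggregate between source and target.
    import sqlite3

    src = sqlite3.connect(":memory:")
    tgt = sqlite3.connect(":memory:")
    for db in (src, tgt):
        db.execute("CREATE TABLE sales (id INTEGER, amount REAL)")
    src.executemany("INSERT INTO sales VALUES (?, ?)", [(1, 10.0), (2, 20.5)])
    tgt.executemany("INSERT INTO sales VALUES (?, ?)", [(1, 10.0), (2, 20.5)])

    checks = {
        "row_count": "SELECT COUNT(*) FROM sales",
        "amount_sum": "SELECT ROUND(SUM(amount), 2) FROM sales",
    }
    for name, sql in checks.items():
        s = src.execute(sql).fetchone()[0]
        t = tgt.execute(sql).fetchone()[0]
        print(f"{name}: source={s} target={t} -> {'PASS' if s == t else 'FAIL'}")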


Key Responsibilities

  • Create test strategies, test plans, business scenarios, and data validation scripts for end-to-end data validation.
  • Verify data completeness, accuracy, and integrity throughout ETL processes, data pipelines, and reports.
  • Evaluate and monitor the performance of ETL jobs to ensure adherence to defined SLAs.
  • Automate data testing processes using Python or other relevant technologies.
  • Validate various types of fact and dimension tables within data warehouse environments.
  • Apply strong data warehousing (DWH) skills to ensure accurate data modeling and validation.
  • Validate large datasets and ensure accuracy across relational databases.
  • Validate file ingestions and data exports across different data sources.
  • Assess and validate implementation of data security standards (encryption, authorization, anonymization).
  • Demonstrate proficiency in SQL, Python, and ETL/ELT validation techniques.
  • Validate reports and dashboards built on Power BI, Tableau, or similar platforms.
  • Write complex scripts to validate business logic and KPIs across datasets.
  • Create test data as required based on business use cases and scenarios.
  • Identify, validate, and test corner business cases and edge scenarios.
  • Prepare comprehensive test documentation including test cases, test results, and test summary reports.
  • Collaborate closely with developers, business analysts, data architects, and other stakeholders.
  • Recommend enhancements and implement best practices to strengthen and streamline testing processes.

Required Skills and Qualifications

  • Education: Bachelor’s degree in Computer Science, Information Technology, or a related discipline.
  • Technical Expertise: Strong understanding of ETL processes, data warehousing concepts, SQL, and Python.
  • Experience: 4–6 years of experience in ETL testing, data validation, and report/dashboard validation; prior experience in automating data validation processes.
  • Tools: Hands-on experience with ETL tools such as ADF, DBT, etc., defect tracking systems like JIRA, and reporting platforms such as Power BI or Tableau.
  • Soft Skills: Excellent communication and teamwork abilities, with strong analytical and problem-solving skills.

Why Join PalTech?

  • Great Place to Work Certified: We prioritize employee well-being and nurture an inclusive, collaborative environment where everyone can thrive.
  • Competitive compensation, strong learning and professional growth opportunities.
Pivotree

Agency job
via AccioJob by AccioJobHiring Board
Bengaluru (Bangalore)
0 - 1 yrs
₹5L - ₹5.5L / yr
DSA
Java
SQL

AccioJob is conducting a Walk-In Hiring Drive with Pivotree for the position of Technical Support.


To apply, register and select your slot here: https://go.acciojob.com/mFZkWn


Required Skills: Java, DSA, SQL


Eligibility:

  • Degree: BTech./BE
  • Branch: Computer Science/CSE/Other CS related branch, Electrical/Other electrical related branches
  • Graduation Year: 2024, 2025


Work Details:

Work Location: Bangalore (Onsite)

CTC: 5 LPA to 5.5 LPA


Evaluation Process:

Round 1: Offline Assessment at AccioJob Bangalore Centre


Further Rounds (for shortlisted candidates only):

Resume Evaluation, Technical Interview 1, HR Discussion


Important Note: Bring your laptop & earphones for the test.


Register here: https://go.acciojob.com/mFZkWn


FAST SLOT BOOKING

[ DOWNLOAD ACCIOJOB APP ]

https://go.acciojob.com/9hznVG


PalTech

Posted by Nikita Sinha
Hyderabad
4 - 8 yrs
Upto ₹35L / yr (Varies)
SQL
Python
ETL
ELT
Data Warehousing

As a Data Engineer at PalTech, you will design, develop, and maintain scalable and reliable data pipelines to ensure seamless data flow across systems. You will leverage SQL and leading ETL tools (such as Informatica, ADF, etc.) to support data integration and transformation needs. This role involves building and optimizing data warehouse architectures, performing performance tuning, and ensuring high levels of data quality, accuracy, and consistency throughout the data lifecycle.


You will collaborate closely with cross-functional teams to understand business requirements and translate them into effective data solutions. The ideal candidate should possess strong problem-solving skills, sound knowledge of data architecture principles, and a passion for building clean and efficient data systems.
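As a rough sketch of one common load/update strategy mentioned above, an idempotent upsert keyed on a business key (names are hypothetical; SQLite's ON CONFLICT syntax, available since SQLite 3.24, stands in for the warehouse's merge/upsert):

    # Sketch: idempotent incremental load via INSERT ... ON CONFLICT (upsert).
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("""
        CREATE TABLE dim_customer (
            customer_id INTEGER PRIMARY KEY,
            name TEXT,
            updated_at TEXT
        )
    """)

    incoming = [(10, "Asha", "2024-04-01"), (11, "Ravi", "2024-04-01")]
    conn.executemany("""
        INSERT INTO dim_customer (customer_id, name, updated_at)
        VALUES (?, ?, ?)
        ON CONFLICT(customer_id) DO UPDATE SET
            name = excluded.name,
            updated_at = excluded.updated_at
    """, incoming)
    print(conn.execute("SELECT * FROM dim_customer").fetchall())

Re-running the same batch leaves the table unchanged, which is what makes the load safe to retry.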


Key Responsibilities

  • Design, develop, and maintain ETL/ELT pipelines using SQL and tools such as Informatica, ADF, etc.
  • Build and optimize data warehouse and data lake solutions for reporting, analytics, and operational usage.
  • Apply strong understanding of data warehousing concepts to architect scalable data solutions.
  • Handle large datasets and design effective load/update strategies.
  • Collaborate with data analysts, business users, and data scientists to understand requirements and deliver scalable solutions.
  • Implement data quality checks and validation frameworks to ensure data reliability and integrity.
  • Perform SQL and ETL performance tuning and optimization.
  • Work with structured and semi-structured data from various source systems.
  • Monitor, troubleshoot, and resolve issues in data workflows.
  • Maintain documentation for data pipelines, data flows, and data definitions.
  • Follow best practices in data engineering including security, logging, and error handling.

Required Skills & Qualifications

Education:

  • Bachelor’s degree in Computer Science, Information Technology, or a related field.

Technical Skills:

  • Strong proficiency in SQL and data manipulation.
  • Hands-on experience with ETL tools (e.g., Informatica, Talend, ADF).
  • Experience with cloud data warehouse platforms such as BigQuery, Redshift, or Snowflake.
  • Strong understanding of data warehousing concepts and data modeling.
  • Proficiency in Python or a similar programming language.
  • Experience working with RDBMS platforms (e.g., SQL Server, Oracle).
  • Familiarity with version control systems and job schedulers.

Experience:

  • 4 to 8 years of relevant experience in data engineering and ETL development.

Soft Skills:

  • Strong problem-solving skills.
  • Excellent communication and collaboration abilities.
  • Ability to work effectively in a cross-functional team environment.


Verinite Technologies

Posted by Bisman Gill
Pune
8yrs+
Upto ₹25L / yr (Varies)
Acquiring
Switch
MS-Excel
VLOOKUP
JIRA
+5 more

About the Company:


Verinite is a global technology consulting and services company laser-focused on the banking & financial services sector, especially in cards, payments, lending, trade, and treasury.


They partner with banks, fintechs, payment processors, and other financial institutions to modernize their systems, improve operational resilience, and accelerate digital transformation. Their services include consulting, digital strategy, data, application modernization, quality engineering (testing), cloud & infrastructure, and application maintenance.


Skill – Authorization, Clearing and Settlement

1. Individual should have worked on schemes (Visa, Amex, Discover, RuPay & Mastercard), on either the authorization or clearing side.

2. Should be able to read scheme specifications and create business requirements/mappings for authorization and clearing

3. Should have hands-on experience in implementing scheme-related changes

4. Should be able to validate and certify the change post-development based on the mapping created

5. Should be able to work with the Dev team, explaining and guiding on a time-to-time basis.

6. Able to communicate with various teams & senior stakeholders

7. Go-getter and a great googler

8. Schemes – VISA/MC/AMEX/JCB/CUP/Mercury – Discover and Diners, CBUAE, Jaywan (local scheme from UAE)

9. Experience on the issuing side is a plus (good to have).
9.Experience with Issuing side is plus (good to have).

Bits In Glass

Posted by Nikita Sinha
Pune, Mohali, Hyderabad
5 - 8 yrs
Upto ₹32L / yr (Varies)
Python
SQL
Data engineering
databricks

We are seeking a highly skilled Senior Data Engineer with expertise in Databricks, Python, Scala, Azure Synapse, and Azure Data Factory to join our data engineering team. The team is responsible for ingesting data from multiple sources, making it accessible to internal stakeholders, and enabling seamless data exchange across internal and external systems.

You will play a key role in enhancing and scaling our Enterprise Data Platform (EDP) hosted on Azure and built using modern technologies such as Databricks, Synapse, Azure Data Factory (ADF), ADLS Gen2, Azure DevOps, and CI/CD pipelines.
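For flavour, a minimal PySpark transformation of the sort such pipelines contain (a sketch only, assuming a local Spark session with pyspark installed; dataset and column names are hypothetical):

    # Sketch: a small PySpark aggregation.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("edp-sketch").getOrCreate()

    df = spark.createDataFrame(
        [("2024-04-01", "web", 120.0), ("2024-04-01", "app", 80.0)],
        ["order_date", "channel", "amount"],
    )
    daily = df.groupBy("order_date").agg(F.sum("amount").alias("revenue"))
    daily.show()
    spark.stop()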


Responsibilities

  • Design, develop, optimize, and maintain scalable data architectures and pipelines aligned with ETL principles and business goals.
  • Collaborate across teams to build simple, functional, and scalable data solutions.
  • Troubleshoot and resolve complex data issues to support business insights and organizational objectives.
  • Build and maintain data products to support company-wide usage.
  • Advise, mentor, and coach data and analytics professionals on standards and best practices.
  • Promote reusability, scalability, operational efficiency, and knowledge-sharing within the team.
  • Develop comprehensive documentation for data engineering standards, processes, and capabilities.
  • Participate in design and code reviews.
  • Partner with business analysts and solution architects on enterprise-level technical architectures.
  • Write high-quality, efficient, and maintainable code.

Technical Qualifications

  • 5–8 years of progressive data engineering experience.
  • Strong expertise in Databricks, Python, Scala, and Microsoft Azure services including Synapse & Azure Data Factory (ADF).
  • Hands-on experience with data pipelines across multiple source & target systems (Databricks, Synapse, SQL Server, Data Lake, SQL/NoSQL sources, and file-based systems).
  • Experience with design patterns, code refactoring, CI/CD, and building scalable data applications.
  • Experience developing batch ETL pipelines; real-time streaming experience is a plus.
  • Solid understanding of data warehousing, ETL, dimensional modeling, data governance, and handling both structured and unstructured data.
  • Deep understanding of Synapse and SQL Server, including T-SQL and stored procedures.
  • Proven experience working effectively with cross-functional teams in dynamic environments.
  • Experience extracting, processing, and analyzing large / complex datasets.
  • Strong background in root cause analysis for data and process issues.
  • Advanced SQL proficiency and working knowledge of a variety of database technologies.
  • Knowledge of Boomi is an added advantage.

Core Skills & Competencies

  • Excellent analytical and problem-solving abilities.
  • Strong communication and cross-team collaboration skills.
  • Self-driven with the ability to make decisions independently.
  • Innovative mindset and passion for building quality data solutions.
  • Ability to understand operational systems, identify gaps, and propose improvements.
  • Experience with large-scale data ingestion and engineering.
  • Knowledge of CI/CD pipelines (preferred).
  • Understanding of Python and parallel processing frameworks (MapReduce, Spark, Scala).
  • Familiarity with Agile development methodologies.

Education

  • Bachelor’s degree in Computer Science, Information Technology, MIS, or an equivalent field.


Bits In Glass

Posted by Nikita Sinha
Pune, Mohali, Hyderabad
3 - 5 yrs
Upto ₹25L / yr (Varies)
databricks
SQL
Python

As a Data Engineer, you will be an integral part of our team, working on data pipelines, data warehousing, and data integration for various analytics and AI use cases. You will collaborate closely with Delivery Managers, ML Engineers and other stakeholders to ensure seamless data flow and accessibility. Your expertise will be crucial in enabling data-driven decision-making for our clients. To thrive in this role, you need to be a quick learner, get excited about innovation and be on the constant lookout to master new technologies as they come up in the Data, AI & Cloud teams.   


Key Responsibilities

  • Design, develop, and maintain scalable data pipelines and ETL processes to support downstream analytics and AI applications.
  • Collaborate with ML Engineers to integrate data solutions into machine learning models and workflows.
  • Work closely with clients to understand their data requirements and deliver tailored data solutions.
  • Ensure data quality, integrity, and security across all projects.
  • Optimize and manage data storage solutions in cloud environments (AWS, Azure, GCP).
  • Utilize Databricks for data processing and analytics tasks, leveraging its capabilities to enhance data workflows.
  • Monitor the performance of data pipelines, identify bottlenecks or failures, and implement improvements to enhance efficiency and reliability.
  • Implement best practices for data engineering, including documentation, testing, and version control.
  • Troubleshoot and resolve data-related issues in a timely manner.


Qualifications

  • Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field.
  • 3 to 5 years of experience as a Data Engineer or in a similar role.
  • Strong proficiency in SQL, Python, and other relevant programming languages.
  • Hands-on experience with Databricks and its ecosystem.
  • Familiarity with major cloud environments (AWS, Azure, GCP) and their data services.
  • Experience with data warehousing solutions like Snowflake, Redshift, or BigQuery.
  • Comfortable working with a variety of SQL, NoSQL, and graph databases like PostgreSQL and MongoDB.
  • Knowledge of data integration tools.
  • Understanding of data modelling, data architecture, and database design.
  • Excellent problem-solving skills and attention to detail.
  • Strong communication and collaboration skills.


Highly Desirable Skills

  • Experience with real-time data processing frameworks (e.g., Apache Kafka, Spark Streaming).
  • Knowledge of data visualisation tools (e.g., Tableau, Power BI).
  • Familiarity with machine learning concepts and frameworks.
  • Experience working in a client-facing role.
Ekloud INC
Posted by Kratika Agarwal
Remote only
8 - 14 yrs
₹6L - ₹14L / yr
m365
MS SharePoint
sharepoint online
ms team
exchange online
+5 more

Requires that any candidate know the M365 collaboration environment: SharePoint Online, MS Teams, Exchange Online, Entra, and Purview. Need a developer who possesses a strong understanding of data structures, problem-solving abilities, SQL, PowerShell, MS Teams app development, Python, Visual Basic, C#, JavaScript, Java, HTML, PHP, and C.

Sun King

Posted by Reshika Mendiratta
Remote only
1 - 3 yrs
Upto ₹18L / yr (Varies)
Java
Spring Boot
Hibernate (Java)
SQL
Microservices
+3 more

About Sun King

Sun King is the world’s leading off-grid solar energy company, delivering energy access to the 1.8 billion people without reliable grid connections through innovative product design, fintech solutions, and field operations.

Key highlights:

  • Connected over 20 million homes to solar power across Africa and Asia, adding 200,000 homes monthly.
  • Affordable ‘pay-as-you-go’ financing model; after 1-2 years, customers own their solar equipment.
  • Saved customers over $4 billion to date.
  • Collect 650,000 daily payments via 28,000 field agents using mobile money systems.
  • Products range from home lighting to high-energy appliances, with expansion into clean cooking, electric mobility, and entertainment.

With 2,800 staff across 12 countries, our team includes experts in various fields, all passionate about serving off-grid communities.


Diversity Commitment:

44% of our workforce are women, reflecting our commitment to gender diversity.


About the role:

The Backend Developer works remotely as part of the technology team to help Sun King’s EasyBuy business unit design and develop software to improve its field team operations.


What you will be expected to do

  • Design and develop applications/systems based on wireframes and product requirements documents. 
  • Design and develop logical and physical data models to meet application requirements. 
  • Identify and resolve bottlenecks and bugs based on operational requirements.
  • Perform unit tests on code to ensure robustness, including edge cases, usability, and general reliability. 
  • Write reusable and easily maintainable code following the principles of DRY (Don’t Repeat Yourself). 
  • Integrate existing tools and business systems, both in-house and external services, such as ticketing software and communication tools. 
  • Collaborate with team members and product managers to understand project requirements and contribute to the overall system design. 

You might be a strong candidate if you have/are

  • Development experience: 1-3 years of backend development experience, with strong problem-solving abilities and proficiency in data structures and algorithms.
  • A profound grasp of object-oriented programming (OOP) standards and expertise in Core Java.
  • Knowledge of SQL, MySQL, or similar database management.
  • Experience in integrating web services such as SOAP and REST, and data formats such as JSON and XML.
  • Familiarity with RESTful APIs for linking Android applications to backend services.
  • Experience with version control systems like Git (preferred, but not mandatory).
  • Additional knowledge of web technologies like HTML, CSS, and JavaScript, and frameworks like Spring or Hibernate, would be advantageous.

What we offer (in addition to compensation and statutory benefits):

  • A platform for professional growth in a rapidly expanding, high-impact sector.
  • Immerse in a collaborative culture, energized by employees of Sun King who are collectively motivated by fostering a transformative, sustainable venture.
  • A genuinely global environment: Engage and learn alongside a diverse group from varied geographies and backgrounds.
  • Tailored learning pathways through the Sun King Center for Leadership to elevate your leadership and managerial capabilities.
PayTabs Global

Agency job
via AccioJob by AccioJobHiring Board
Remote, Chennai
0 - 1 yrs
₹3.5L - ₹4L / yr
Java
DSA
Object Oriented Programming (OOPs)
Spring Boot
SQL

AccioJob is conducting a Walk-In Hiring Drive with PayTabs Global for the position of Java Backend Developer.


To apply, register and select your slot here: https://go.acciojob.com/yU7t3p


Required Skills: Java, DSA, OOPS, Spring Boot, SQL


Eligibility:

  • Degree: MTech./ME, BTech./BE, BCA, MCA
  • Branch: IT, Computer Science/CSE/Other CS related branch, Electrical/Other electrical related branches
  • Graduation Year: 2024, 2025, 2026


Work Details:

  • First 3 months of internship will be Work From Home (WFH)
  • After that, selected candidates must relocate to Chennai
  • CTC: 3.5 LPA to 4 LPA


Evaluation Process:

Round 1: Offline Assessment at AccioJob Bangalore Centre, AccioJob Chennai Centre


Further Rounds (for shortlisted candidates only):

Technical Interview 1, Technical Interview 2, HR Discussion


Important Note: Bring your laptop & earphones for the test.


Register here: https://go.acciojob.com/yU7t3p


FAST SLOT BOOKING

[ DOWNLOAD ACCIOJOB APP ]

https://go.acciojob.com/yZgBhZ

Oddr Inc
Posted by Deepika Madgunki
Remote only
2 - 5 yrs
₹5L - ₹15L / yr
BOOMI
iPaaS
SQL
Microsoft Windows Azure
RESTful APIs
+1 more

- Design and implement integration solutions using iPaaS tools.

- Collaborate with customers, product, engineering and business stakeholders to translate business requirements into robust and scalable integration processes.

- Develop and maintain SQL queries and scripts to facilitate data manipulation and integration (a minimal sketch follows this list).

- Utilize RESTful API design and consumption to ensure seamless data flow between various systems and applications.

- Lead the configuration, deployment, and ongoing management of integration projects.

- Troubleshoot and resolve technical issues related to integration solutions.

- Document integration processes and create user guides for internal and external users.

- Stay current with the latest developments in iPaaS technologies and best practices
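As a minimal sketch of the REST-plus-SQL pattern above (the endpoint URL and table schema are hypothetical; the requests library and SQLite are used purely for illustration):

    # Sketch: consume a REST endpoint and stage the payload for integration.
    import sqlite3
    import requests

    resp = requests.get("https://api.example.com/v1/contacts", timeout=30)
    resp.raise_for_status()
    contacts = resp.json()  # assume a list of {"id": ..., "email": ...} objects

    conn = sqlite3.connect("staging.db")
    conn.execute(
        "CREATE TABLE IF NOT EXISTS stg_contacts (id INTEGER PRIMARY KEY, email TEXT)"
    )
    conn.executemany(
        "INSERT OR REPLACE INTO stg_contacts (id, email) VALUES (:id, :email)",
        contacts,
    )
    conn.commit()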


Qualifications:

- Bachelor’s degree in Computer Science, Information Technology, or a related field.

- Minimum of 3 years’ experience in an integration engineering role with hands-on experience in an iPaaS tool, preferably Boomi.

- Proficiency in SQL and experience with database management and data integration patterns.

- Strong understanding of integration patterns and solutions, API design, and cloud-based technologies.

- Good understanding of RESTful APIs and integration.

- Excellent problem-solving and analytical skills.

- Strong communication and interpersonal skills, with the ability to work effectively in a team environment.

- Experience with various integration protocols (REST, SOAP, FTP, etc.) and data formats (JSON, XML, etc.).


Preferred Skills:

- Boomi (or other iPaaS) certifications

- Experience with Intapp's Integration Builder is highly desirable but not mandatory.

- Certifications in Boomi or similar integration platforms.

- Experience with cloud services like MS Azure.

- Knowledge of additional programming languages (e.g., .NET, Java) is advantageous.


What we offer:

- Competitive salary and benefits package.

- Dynamic and innovative work environment.

- Opportunities for professional growth and advancement.


Auxo AI
Posted by kusuma Gullamajji
Bengaluru (Bangalore), Hyderabad, Mumbai, Gurugram
2 - 8 yrs
₹10L - ₹35L / yr
GCP
Python
SQL
Google Cloud Platform (GCP)

Responsibilities:

Build and optimize batch and streaming data pipelines using Apache Beam on Dataflow (a minimal Beam sketch follows the Responsibilities list)

Design and maintain BigQuery datasets using best practices in partitioning, clustering, and materialized views

Develop and manage Airflow DAGs in Cloud Composer for workflow orchestration

Implement SQL-based transformations using Dataform (or dbt)

Leverage Pub/Sub for event-driven ingestion and Cloud Storage for raw/lake layer data architecture

Drive engineering best practices across CI/CD, testing, monitoring, and pipeline observability

Partner with solution architects and product teams to translate data requirements into technical designs

Mentor junior data engineers and support knowledge-sharing across the team

Contribute to documentation, code reviews, sprint planning, and agile ceremonies
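Illustratively, a tiny Beam pipeline in the spirit of the batch work above; this is a sketch that runs on the local DirectRunner (apache-beam installed), with a made-up element shape. On Dataflow the same code runs with different pipeline options:

    # Sketch: a minimal Apache Beam batch pipeline.
    import apache_beam as beam

    with beam.Pipeline() as p:
        (
            p
            | "Create" >> beam.Create([("web", 1), ("app", 1), ("web", 1)])
            | "SumPerKey" >> beam.CombinePerKey(sum)
            | "Print" >> beam.Map(print)  # ('web', 2) then ('app', 1)
        )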

Requirements

2+ years of hands-on experience in data engineering, with at least 2 years on GCP

Proven expertise in BigQuery, Dataflow (Apache Beam), Cloud Composer (Airflow)

Strong programming skills in Python and/or Java

Experience with SQL optimization, data modeling, and pipeline orchestration

Familiarity with Git, CI/CD pipelines, and data quality monitoring frameworks

Exposure to Dataform, dbt, or similar tools for ELT workflows

Solid understanding of data architecture, schema design, and performance tuning

Excellent problem-solving and collaboration skills

Bonus Skills:

GCP Professional Data Engineer certification

Experience with Vertex AI, Cloud Functions, Dataproc, or real-time streaming architectures

Familiarity with data governance tools (e.g., Atlan, Collibra, Dataplex)

Exposure to Docker/Kubernetes, API integration, and infrastructure-as-code (Terraform)

Deqode

Posted by purvisha Bhavsar
Pune, Gurugram, Bhopal, Jaipur, Bengaluru (Bangalore)
2 - 4 yrs
₹5L - ₹12L / yr
Windows Azure
SQL
Data Structures
databricks

 Hiring: Azure Data Engineer

⭐ Experience: 2+ Years

📍 Location: Pune, Bhopal, Jaipur, Gurgaon, Bangalore

⭐ Work Mode: Hybrid

⏱️ Notice Period: Immediate Joiners

Passport: Mandatory & Valid

(Only immediate joiners & candidates serving notice period)


Mandatory Skills:

Azure Synapse, Azure Databricks, Azure Data Factory (ADF), SQL, Delta Lake, ADLS, ETL/ELT, PySpark (a minimal Delta Lake sketch follows the Responsibilities list).


Responsibilities:

  • Build and maintain data pipelines using ADF, Databricks, and Synapse.
  • Develop ETL/ELT workflows and optimize SQL queries.
  • Implement Delta Lake for scalable lakehouse architecture.
  • Create Synapse data models and Spark/Databricks notebooks.
  • Ensure data quality, performance, and security.
  • Collaborate with cross-functional teams on data requirements.
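A minimal Delta Lake sketch, assuming a Spark session with Delta configured (as on Databricks); the storage path and column names are hypothetical:

    # Sketch: write and read a Delta table with PySpark.
    # Assumes Delta Lake is configured on the cluster (e.g., Databricks).
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("delta-sketch").getOrCreate()

    df = spark.createDataFrame([(1, "open"), (2, "paid")], ["order_id", "status"])
    df.write.format("delta").mode("append").save("/mnt/lake/bronze/orders")

    spark.read.format("delta").load("/mnt/lake/bronze/orders").show()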


Nice to Have:

Azure DevOps, Python, Streaming (Event Hub/Kafka), Power BI, Azure certifications (DP-203).


Ekloud INC
Posted by Seema KK
Remote only
8 - 12 yrs
₹23L - ₹25L / yr
Java
Spring Boot
Microservices
React.js
SQL
+9 more

Job Description:

Technical Lead – Full Stack

Experience: 8–12 years (strong candidates: Java 50% / React 50%)

Location – Bangalore/Hyderabad

Interview Levels – 3 Rounds

Tech Stack: Java, Spring Boot, Microservices, React, SQL

Focus: Hands-on coding, solution design, team leadership, delivery ownership

 

Must-Have Skills (Depth)

Java (8+): Streams, concurrency, collections, JVM internals (GC), exception handling.

Spring Boot: Security, Actuator, Data/JPA, Feign/RestTemplate, validation, profiles, configuration management.

Microservices: API design, service discovery, resilience patterns (Hystrix/Resilience4j), messaging (Kafka/RabbitMQ) optional.

React: Hooks, component lifecycle, state management, error boundaries, testing (Jest/RTL).

SQL: Joins, aggregations, indexing, query optimization, transaction isolation, schema design.

Testing: JUnit/Mockito for backend; Jest/RTL/Cypress for frontend.

DevOps: Git, CI/CD, containers (Docker), familiarity with deployment environments.

lulu international

Agency job
via Episeio Business Solutions by Praveen Saulam
Bengaluru (Bangalore)
2.5 - 3 yrs
₹7L - ₹9L / yr
SQL
PySpark
databricks
Hypothesis testing
ANOVA gauge R&R

Role Overview

As a Lead Data Scientist / Data Analyst, you’ll combine analytical thinking, business acumen, and technical expertise to design and deliver impactful data-driven solutions. You’ll lead analytical problem-solving for retail clients — from data exploration and visualisation to predictive modelling and actionable business insights.

 

Key Responsibilities

• Partner with business stakeholders to understand problems and translate them into analytical solutions.

• Lead end-to-end analytics projects — from hypothesis framing and data wrangling to insight delivery and model implementation.

• Drive exploratory data analysis (EDA), identify patterns/trends, and derive meaningful business stories from data.

• Design and implement statistical and machine learning models (e.g., segmentation, propensity, CLTV, price/promo optimisation).

• Build and automate dashboards, KPI frameworks, and reports for ongoing business monitoring.

• Collaborate with data engineering and product teams to deploy solutions in production environments.

• Present complex analyses in a clear, business-oriented way, influencing decision-making across retail categories.

• Promote an agile, experiment-driven approach to analytics delivery.

 

Common Use Cases You’ll Work On

• Customer segmentation (RFM, mission-based, behavioural)

• Price and promo effectiveness

• Assortment and space optimisation

• CLTV and churn prediction

• Store performance analytics and benchmarking

• Campaign measurement and targeting

• Category in-depth reviews and presentation to the L1 leadership team

 

Required Skills and Experience

• 3+ years of experience in data science, analytics, or consulting (preferably in the retail domain)

• Proven ability to connect business questions to analytical solutions and communicate insights effectively

• Strong SQL skills for data manipulation and querying large datasets

• Advanced Python for statistical analysis, machine learning, and data processing

• Intermediate PySpark / Databricks skills for working with big data

• Comfortable with data visualisation tools (Power BI, Tableau, or similar)

• Knowledge of statistical techniques (hypothesis testing, ANOVA, regression, A/B testing, etc.); a minimal significance-test sketch follows this list

• Familiarity with agile project management tools (JIRA, Trello, etc.)
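For illustration, a minimal significance check of the kind listed above, on made-up A/B conversion data (scipy assumed available; a Welch two-sample t-test stands in for whichever test the analysis actually calls for):

    # Sketch: two-sample t-test on synthetic A/B conversion outcomes.
    from scipy import stats

    control = [0, 1, 0, 1, 0, 0, 1, 0]
    variant = [1, 1, 0, 1, 1, 0, 1, 1]

    t_stat, p_value = stats.ttest_ind(variant, control, equal_var=False)
    print(f"t={t_stat:.2f}, p={p_value:.3f}")  # flag significance if p < 0.05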

 

Good to Have

• Experience designing data pipelines or analytical workflows in cloud environments (Azure preferred)

• Strong understanding of retail KPIs (sales, margin, penetration, conversion, ATV, UPT, etc.)

• Prior exposure to Promotion or Pricing analytics 

• Dashboard development or reporting automation expertise

 

a large software company

Agency job
via AccioJob by AccioJobHiring Board
Chennai
0 - 1 yrs
₹4L - ₹5L / yr
Java
Spring Boot
Object Oriented Programming (OOPs)
DSA
SQL

AccioJob is conducting a Walk-In Hiring Drive with a large software company for the position of Java Backend Developer.


To apply, register and select your slot here: https://go.acciojob.com/7mGZE7


Required Skills: Java, Spring Boot, OOPS, DSA, SQL


Eligibility:

  • Degree: BTech./BE, MTech./ME
  • Branch: Computer Science/CSE/Other CS related branch, IT
  • Graduation Year: 2024, 2025, 2026


Work Details:

  • Work Location: Chennai (Onsite)
  • CTC: 4 LPA to 5 LPA


Evaluation Process:

Round 1: Offline Assessment at AccioJob Chennai Centre


Further Rounds (for shortlisted candidates only):

  • Resume Evaluation
  • Technical Interview 1
  • Technical Interview 2


Important Note: Bring your laptop & earphones for the test.


Register here: https://go.acciojob.com/7mGZE7


FAST SLOT BOOKING

[ DOWNLOAD ACCIOJOB APP ]

https://go.acciojob.com/CyU7zD

Neuvamacro Technology Pvt Ltd
Remote only
5 - 10 yrs
₹13L - ₹18L / yr
PowerBI
Office 365
Microsoft Dynamics
Amazon Web Services (AWS)
Javascript
+10 more

We are seeking a highly skilled Power Platform Developer with deep expertise in designing, developing, and deploying solutions using Microsoft Power Platform. The ideal candidate will have strong knowledge of Power Apps, Power Automate, Power BI, Power Pages, and Dataverse, along with integration capabilities across Microsoft 365, Azure, and third-party systems.


Key Responsibilities

  • Solution Development:
  • Design and build custom applications using Power Apps (Canvas & Model-Driven).
  • Develop automated workflows using Power Automate for business process optimization.
  • Create interactive dashboards and reports using Power BI for data visualization and analytics.
  • Configure and manage Dataverse for secure data storage and modelling.
  • Develop and maintain Power Pages for external-facing portals.
  • Integration & Customization:
  • Integrate Power Platform solutions with Microsoft 365, Dynamics 365, Azure services, and external APIs.
  • Implement custom connectors and leverage Power Platform SDK for advanced scenarios.
  • Utilize Azure Functions, Logic Apps, and REST APIs for extended functionality.
  • Governance & Security:
  • Apply best practices for environment management, ALM (Application Lifecycle Management), and solution deployment.
  • Ensure compliance with security, data governance, and licensing guidelines.
  • Implement role-based access control and manage user permissions.
  • Performance & Optimization:
  • Monitor and optimize app performance, workflow efficiency, and data refresh strategies.
  • Troubleshoot and resolve technical issues promptly.
  • Collaboration & Documentation:
  • Work closely with business stakeholders to gather requirements and translate them into technical solutions.
  • Document architecture, workflows, and processes for maintainability.


Required Skills & Qualifications

  • Technical Expertise:
  • Strong proficiency in Power Apps (Canvas & Model-Driven), Power Automate, Power BI, Power Pages, and Dataverse.
  • Experience with Microsoft 365, Dynamics 365, and Azure services.
  • Knowledge of JavaScript, TypeScript, C#, .NET, and Power Fx for custom development.
  • Familiarity with SQL, DAX, and data modeling.
  • Additional Skills:
  • Understanding of ALM practices, solution packaging, and deployment pipelines.
  • Experience with Git, Azure DevOps, or similar tools for version control and CI/CD.
  • Strong problem-solving and analytical skills.
  • Certifications (Preferred):
  • Microsoft Certified: Power Platform Developer Associate.
  • Microsoft Certified: Power Platform Solution Architect Expert.


Soft Skills

  • Excellent communication and collaboration skills.
  • Ability to work in agile environments and manage multiple priorities.
  • Strong documentation and presentation abilities.

 

Upland Software

Posted by Bisman Gill
Remote only
5yrs+
Upto ₹33L / yr (Varies)
.NET
SQL
Object Oriented Programming (OOPs)
Windows Azure
ASP.NET
+1 more

We are looking for an enthusiastic and dynamic individual to join Upland India as a Senior Software Engineer I (Backend) for our Panviva product. The individual will work with our global development team.


What would you do?

  • Develop, Review, test and maintain application code
  • Collaborating with other developers and product to fulfil objectives
  • Troubleshoot and diagnose issues
  • Take lead on tasks as needed
  • Jump in and help the team deliver features when it is required

What are we looking for?

Experience

  • 5+ years of experience in designing and implementing application architecture
  • Back-end developer who enjoys solving problems
  • Demonstrated experience with the .NET ecosystem (.NET Framework, ASP.NET, .NET Core) and SQL Server
  • Experience in building cloud-native applications (Azure)
  • Must be skilled at writing Quality, scalable, maintainable, testable code

Leadership Skills

  • Strong communication skills
  • Ability to mentor/lead junior developers


Primary Skills: The candidate must possess the following primary skills:

  • Strong Back-end developer who enjoys solving problems
  • Solid experience with .NET Core, SQL Server, and .NET design patterns: a strong understanding of OOP principles, .NET-specific implementations (DI, CQRS, Repository, etc.), SOLID architectural principles, unit testing tools, and debugging techniques
  • Applying patterns to improve scalability and reduce technical debt
  • Experience with refactoring legacy codebases using design patterns
  • Real-World Problem Solving
  • Ability to analyze a problem and choose the most suitable design pattern
  • Experience balancing performance, readability, and maintainability
  • Experience building modern, scalable, reliable applications on the MS Azure cloud including services such as:
  • App Services
  • Azure Service Bus/ Event Hubs
  • Azure API Management Service
  • Azure Bot Service
  • Function/Logic Apps
  • Azure key vault & Azure Configuration Service
  • CosmosDB, Mongo DB
  • Azure Search
  • Azure Cognitive Services

Understanding Agile Methodology and Tool Familiarity

  • Solid understanding of Agile development processes, including sprint planning, daily stand-ups, retrospectives, and backlog grooming
  • Familiarity with Agile tools such as JIRA for tracking tasks, managing workflows, and collaborating across teams
  • Experience working in cross-functional Agile teams and contributing to iterative development cycles

Secondary Skills: It would be advantageous if the candidate also has the following secondary skills:

  • Experience with front-end React/jQuery/JavaScript, HTML, and CSS frameworks
  • APM tools: worked on tools such as Grafana, New Relic, CloudWatch, etc.
  • Basic Understanding of AI models
  • Python

About Upland

Upland Software (Nasdaq: UPLD) helps global businesses accelerate digital transformation with a powerful cloud software library that provides choice, flexibility, and value. Upland India is a fully owned subsidiary of Upland Software and headquartered in Bangalore. We are a remote-first company. Interviews and on-boarding are conducted virtually.


Techno Wise
Posted by Ishita Panwar
Pune, Ahmedabad
1 - 10 yrs
₹5.5L - ₹12L / yr
Lead Generation
B2B Marketing
International sales
Inside Sales
Communication Skills
+5 more

Position: Sales Development Representative (International Voice Process)


Job Responsibilities

● Making multiple outbound calls to assigned B2B prospects. Develop sales opportunities by researching the prospective company, using influencing and relationship-building skills, and providing information on the client's product/value proposition.

● Ability to understand the key objections from prospects, clarify their concerns & use product knowledge & vendor-led training to alleviate these concerns & move the sales cycle forward. Persistently follow up with the prospect in a clear & timely manner to ensure positive outcomes.

● Understand customer campaigns and their products/services, and appropriately communicate customer brand identity to prospects. Provide detailed and concise feedback to the Voice Operations leads on the outcomes (conversions / rejects / not interested, etc.).

● Undertake pre-sales outreach processes such as AG, HQL, SQL, BANT, marketing, and sales lead qualification.


Requirements:

● Minimum 2 years' experience in B2B sales, ideally selling to technology stakeholders and senior stakeholders (including the C-suite) within enterprise and SMB organizations, with a solid track record of lead conversions via outbound calling and emails.

● Been part of marketing and sales lead generation campaign teams focusing on SQL, MQL, BANT, AG, etc.

● Excellent verbal communication and convincing skills; should be able to think on their feet and provide effective rebuttals/responses to prospects on the calls.

● Strong track record of meeting their targets for SQL / MQL / BANT/ AG campaigns.

● Should be self-motivated, energetic, able to work in a dynamic environment focusing on outcomes, and demonstrate a high level of resilience.

● A go-getter and collaborator who is keen to learn and is highly receptive to feedback.

Read more
Hyderabad, Bengaluru (Bangalore)
5 - 12 yrs
₹25L - ₹35L / yr
skill iconC#
SQL
skill iconAmazon Web Services (AWS)
skill icon.NET
skill iconJava
+3 more

Senior Software Engineer

Location: Hyderabad, India


Who We Are:

Since our inception back in 2006, Navitas has grown to be an industry leader in the digital transformation space, and we’ve served as trusted advisors supporting our client base within the commercial, federal, and state and local markets.


What We Do:

At our very core, we’re a group of problem solvers providing our award-winning technology solutions to drive digital acceleration for our customers! With proven solutions, award-winning technologies, and a team of expert problem solvers, Navitas has consistently empowered customers to use technology as a competitive advantage and deliver cutting-edge transformative solutions.


What You’ll Do:

Build, Innovate, and Own:

  • Design, develop, and maintain high-performance microservices in a modern .NET/C# environment.
  • Architect and optimize data pipelines and storage solutions that power our AI-driven products.
  • Collaborate closely with AI and data teams to bring machine learning models into production systems.
  • Build integrations with external services and APIs to enable scalable, interoperable solutions.
  • Ensure robust security, scalability, and observability across distributed systems.
  • Stay ahead of the curve — evaluating emerging technologies and contributing to architectural decisions for our next-gen platform.

Responsibilities will include but are not limited to:

  • Provide technical guidance and code reviews that raise the bar for quality and performance.
  • Help create a growth-minded engineering culture that encourages experimentation, learning, and accountability.

What You’ll Need:

  • Bachelor’s degree in Computer Science or equivalent practical experience.
  • 8+ years of professional experience, including 5+ years designing and maintaining scalable backend systems using C#/.NET and microservices architecture.
  • Strong experience with SQL and NoSQL data stores.
  • Solid hands-on knowledge of cloud platforms (AWS, GCP, or Azure).
  • Proven ability to design for performance, reliability, and security in data-intensive systems.
  • Excellent communication skills and ability to work effectively in a global, cross-functional environment.

Set Yourself Apart With:

  • Startup experience, specifically in building a product from 0 to 1
  • Exposure to AI/ML-powered systems, data engineering, or large-scale data processing.
  • Experience in healthcare or fintech domains.
  • Familiarity with modern DevOps practices, CI/CD pipelines, and containerization (Docker/Kubernetes).

Equal Employer/Veterans/Disabled

Navitas Business Consulting is an affirmative action and equal opportunity employer. If reasonable accommodation is needed to participate in the job application or interview process, to perform essential job functions, and/or to receive other benefits and privileges of employment, please contact Navitas Human Resources.

Navitas is an equal opportunity employer. We provide employment and opportunities for advancement, compensation, training, and growth according to individual merit, without regard to race, color, religion, sex (including pregnancy), national origin, sexual orientation, gender identity or expression, marital status, age, genetic information, disability, veteran or military status, or any other characteristic protected under applicable federal, state, or local law. Our goal is for each staff member to have the opportunity to grow to the limits of their abilities and to achieve personal and organizational objectives. We will support positive programs for equal treatment of all staff and full utilization of all qualified employees at all levels within Navitas.

Read more
venanalytics

at venanalytics

2 candid answers
Rincy jain
Posted by Rincy jain
Remote, Mumbai
3 - 4 yrs
₹7L - ₹10L / yr
skill iconPython
SQL
PowerBI
Client Servicing
Team Management
+6 more

About Ven Analytics


At Ven Analytics, we don’t just crunch numbers — we decode them to uncover insights that drive real business impact. We’re a data-driven analytics company that partners with high-growth startups and enterprises to build powerful data products, business intelligence systems, and scalable reporting solutions. With a focus on innovation, collaboration, and continuous learning, we empower our teams to solve real-world business problems using the power of data.


Role Overview


We’re looking for a Power BI Data Analyst who is not just proficient in tools but passionate about building insightful, scalable, and high-performing dashboards. The ideal candidate should have strong fundamentals in data modeling, a flair for storytelling through data, and the technical skills to implement robust data solutions using Power BI, Python, and SQL.


Key Responsibilities


  • Technical Expertise: Develop scalable, accurate, and maintainable data models using Power BI, with a clear understanding of Data Modeling, DAX, Power Query, and visualization principles.


  • Programming Proficiency: Use SQL and Python for complex data manipulation, automation, and analysis.


  • Business Problem Translation: Collaborate with stakeholders to convert business problems into structured data-centric solutions considering performance, scalability, and commercial goals.


  • Hypothesis Development: Break down complex use-cases into testable hypotheses and define relevant datasets required for evaluation.


  • Solution Design: Create wireframes, proof-of-concepts (POC), and final dashboards in line with business requirements.


  • Dashboard Quality: Ensure dashboards meet high standards of data accuracy, visual clarity, performance, and support SLAs.


  • Performance Optimization: Continuously enhance user experience by improving performance, maintainability, and scalability of Power BI solutions.


  • Troubleshooting & Support: Quick resolution of access, latency, and data issues as per defined SLAs.


  • Power BI Development: Use Power BI Desktop for report building and the Power BI Service for distribution


  • Backend development: Develop optimized SQL queries that are easy to consume, maintain and debug.


  • Version Control: Maintain strict control of versions by tracking CRs and bug fixes, and ensure Prod and Dev dashboards are properly maintained.


  • Client Servicing: Engage with clients to understand their data needs, gather requirements, present insights, and ensure timely, clear communication throughout project cycles.


  • Team Management: Lead and mentor a small team by assigning tasks, reviewing work quality, guiding technical problem-solving, and ensuring timely delivery of dashboards and reports.


Must-Have Skills


  • Strong experience building robust data models in Power BI
  • Hands-on expertise with DAX (complex measures and calculated columns)
  • Proficiency in M Language (Power Query) beyond drag-and-drop UI
  • Clear understanding of data visualization best practices (less fluff, more insight)
  • Solid grasp of SQL and Python for data processing
  • Strong analytical thinking and ability to craft compelling data stories
  • A client-servicing background.


Good-to-Have (Bonus Points)


  • Experience using DAX Studio and Tabular Editor
  • Prior work in a high-volume data processing production environment
  • Exposure to modern CI/CD practices or version control with BI tools

 

Why Join Ven Analytics?


  • Be part of a fast-growing startup that puts data at the heart of every decision.
  • Opportunity to work on high-impact, real-world business challenges.
  • Collaborative, transparent, and learning-oriented work environment.
  • Flexible work culture and focus on career development.


Read more
Ekloud INC
Kratika Agarwal
Posted by Kratika Agarwal
Remote only
8 - 14 yrs
₹7L - ₹18L / yr
m365
m365 developer
ms teams
MS SharePoint
Microsoft Exchange
+12 more

Candidates must know the M365 collaboration environment: SharePoint Online, MS Teams, Exchange Online, Entra, and Purview. We need a developer with a strong understanding of data structures, strong problem-solving abilities, and proficiency in SQL, PowerShell, MS Teams app development, Python, Visual Basic, C#, JavaScript, Java, HTML, PHP, and C.

A strong understanding of the development lifecycle and solid debugging skills are required; time management, business acumen, and a positive attitude are a must, along with openness to continual growth.

The ability to code appropriate solutions will be tested in the interview.

Knowledge of a wide variety of Generative AI models

Conceptual understanding of how large language models work

Proficiency in coding languages for data manipulation (e.g., SQL) and machine learning & AI development (e.g., Python)

Experience with dashboarding tools such as Power BI and Tableau (beneficial but not essential)

Read more
Whiz IT Services
Sheeba Harish
Posted by Sheeba Harish
Remote only
10 - 15 yrs
₹20L - ₹20L / yr
skill iconJava
skill iconSpring Boot
Microservices
API
Apache Kafka
+5 more

We are looking for highly experienced Senior Java Developers who can architect, design, and deliver high-performance enterprise applications using Spring Boot and microservices. The role requires a strong understanding of distributed systems, scalability, and data consistency.

Read more
Sonatype

at Sonatype

5 candid answers
Reshika Mendiratta
Posted by Reshika Mendiratta
Hyderabad
5 - 8 yrs
Upto ₹28L / yr (Varies
)
skill iconJava
ETL
Spring
databricks
SQL
+5 more

Who We Are

At Sonatype, we help organizations build better, more secure software by enabling them to understand and control their software supply chains. Our products are trusted by thousands of engineering teams globally, providing critical insights into dependency health, license risk, and software security. We’re passionate about empowering developers—and we back it with data.


The Opportunity

We’re looking for a Data Engineer with full stack expertise to join our growing Data Platform team. This role blends data engineering, microservices, and full-stack development to deliver end-to-end services that power analytics, machine learning, and advanced search across Sonatype.

You will design and build data-driven microservices and workflows using Java, Python, and Spring Batch, implement frontends for data workflows, and deploy everything through CI/CD pipelines into AWS ECS/Fargate. You’ll also ensure services are monitorable, debuggable, and reliable at scale, while clearly documenting designs with Mermaid-based sequence and dataflow diagrams.

This is a hands-on engineering role for someone who thrives at the intersection of data systems, fullstack development, ML, and cloud-native platforms.


What You’ll Do

  • Design, build, and maintain data pipelines, ETL/ELT workflows, and scalable microservices.
  • Development of complex web scraping (Playwright) and real-time pipelines (Kafka/queues/Flink).
  • Develop end-to-end microservices with backend (Java 5+, Python 5+, Spring Batch 2+) and frontend (React or any).
  • Deploy, publish, and operate services in AWS ECS/Fargate using CI/CD pipelines (Jenkins, GitOps).
  • Architect and optimize data storage models in SQL (MySQL, PostgreSQL) and NoSQL stores.
  • Implement web scraping and external data ingestion pipelines.
  • Enable Databricks and PySpark-based workflows for large-scale analytics.
  • Build advanced data search capabilities (fuzzy matching, vector similarity search, semantic retrieval); see the sketch after this list.
  • Apply ML techniques (scikit-learn, classification algorithms, predictive modeling) to data-driven solutions.
  • Implement observability, debugging, monitoring, and alerting for deployed services.
  • Create Mermaid sequence diagrams, flowcharts, and dataflow diagrams to document system architecture and workflows.
  • Drive best practices in fullstack data service development, including architecture, testing, and documentation.
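
To make the fuzzy-matching item above concrete, here is a minimal sketch using only Python's standard library; the catalog of component names and the similarity threshold are illustrative assumptions, not Sonatype's actual implementation (which the posting ties to vector similarity and semantic retrieval as well).

import difflib

# Hypothetical catalog of component names; purely illustrative data.
CATALOG = ["spring-core", "spring-boot-starter", "log4j-core", "commons-lang3"]

def fuzzy_lookup(query: str, threshold: float = 0.6) -> list[tuple[str, float]]:
    # Return catalog entries whose similarity to `query` meets `threshold`.
    scored = []
    for name in CATALOG:
        # SequenceMatcher ratio: 1.0 is an exact match, 0.0 is no overlap.
        score = difflib.SequenceMatcher(None, query.lower(), name.lower()).ratio()
        if score >= threshold:
            scored.append((name, round(score, 3)))
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

if __name__ == "__main__":
    print(fuzzy_lookup("spring-cor"))  # highest-scoring candidate first

In production, edit-distance scoring like this usually serves as a fallback next to index-backed retrieval, since scanning a large catalog per query does not scale.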


What We’re Looking For


Minimum Qualifications

  • 2+ years of experience in a data engineering or backend software engineering role
  • Strong programming skills in Python, Scala, or Java
  • Hands-on experience with HBase or similar NoSQL columnar stores
  • Hands-on experience with distributed data systems like Spark, Kafka, or Flink
  • Proficient in writing complex SQL and optimizing queries for performance
  • Experience building and maintaining robust ETL/ELT pipelines in production
  • Familiarity with workflow orchestration tools (Airflow, Dagster, or similar)
  • Understanding of data modeling techniques (star schema, dimensional modeling, etc.)
  • Familiarity with CI/CD pipelines (Jenkins or similar)
  • Ability to visualize and communicate architectures using Mermaid diagrams

Bonus Points

  • Experience working with Databricks, dbt, Terraform, or Kubernetes
  • Familiarity with streaming data pipelines or real-time processing
  • Exposure to data governance frameworks and tools
  • Experience supporting data products or ML pipelines in production
  • Strong understanding of data privacy, security, and compliance best practices


Why You’ll Love Working Here

  • Data with purpose: Work on problems that directly impact how the world builds secure software
  • Modern tooling: Leverage the best of open-source and cloud-native technologies
  • Collaborative culture: Join a passionate team that values learning, autonomy, and impact
Read more
Chennai
0 - 0 yrs
₹2.5L - ₹3L / yr
skill iconPython
skill iconJava
skill iconJavascript
SQL
skill iconGit
+3 more

We are seeking enthusiastic and motivated fresh graduates with a strong foundation in programming, primarily in Python, and basic knowledge of Java, C#, or JavaScript. This role offers hands-on experience in developing applications, writing clean code, and collaborating on real-world projects under expert guidance.


Key Responsibilities

• Develop and maintain applications using Python as the primary language.

• Assist in coding, debugging, and testing software modules in Java, C#, or JavaScript as needed.

• Collaborate with senior developers to learn best practices and contribute to project deliverables.

• Write clean, efficient, and well-documented code.

• Participate in code reviews and follow standard development processes.

• Continuously learn and adapt to new technologies and frameworks.


Core Expectations

• Eagerness to Learn: Open to acquiring new programming skills and frameworks.

• Adaptability: Ability to work across multiple languages and environments.

• Problem-Solving: Strong analytical skills to troubleshoot and debug issues.

• Team Collaboration: Work effectively with peers and seniors.

• Professionalism: Good communication skills and a positive attitude.


Qualifications

• Bachelor’s degree in Computer Science, IT, or related field.

• Strong understanding of Python (OOP, data structures, basic frameworks like Flask/Django); a minimal Flask sketch follows this list.

• Basic knowledge of Java, C#, or JavaScript.

• Familiarity with version control systems (Git).

• Understanding of databases (SQL/NoSQL) is a plus.
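
As a hedged illustration of the "basic frameworks like Flask/Django" expectation above, here is a minimal Flask endpoint; the route and data are invented for illustration and are not part of any company codebase.

from flask import Flask, jsonify

app = Flask(__name__)

# Illustrative in-memory data; a real service would query SQL/NoSQL storage.
GREETINGS = {"en": "Hello", "fr": "Bonjour"}

@app.route("/greet/<lang>")
def greet(lang: str):
    # Return a greeting in the requested language, or 404 if unknown.
    if lang not in GREETINGS:
        return jsonify(error="unknown language"), 404
    return jsonify(message=GREETINGS[lang])

if __name__ == "__main__":
    app.run(debug=True)  # development server only

Running the script and requesting /greet/en returns {"message": "Hello"}.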

NOTE: A laptop with high-speed internet is mandatory

Read more
Global Digital Transformation Solutions Provider

Global Digital Transformation Solutions Provider

Agency job
via Peak Hire Solutions by Dhara Thakkar
Pune
6 - 12 yrs
₹25L - ₹30L / yr
skill iconMachine Learning (ML)
AWS CloudFormation
Online machine learning
skill iconAmazon Web Services (AWS)
ECS
+20 more

MUST-HAVES: 

  • Machine Learning + AWS + (EKS OR ECS OR Kubernetes) + (Redshift AND Glue) + SageMaker
  • Notice period - 0 to 15 days only 
  • Hybrid work mode- 3 days office, 2 days at home


SKILLS: AWS, AWS CLOUD, AMAZON REDSHIFT, EKS


ADDITIONAL GUIDELINES:

  • Interview process: 2 technical rounds + 1 client round
  • 3 days in office, Hybrid model. 


CORE RESPONSIBILITIES:

  • The MLE will design, build, test, and deploy scalable machine learning systems, optimizing model accuracy and efficiency
  • Model Development: Algorithms and architectures span traditional statistical methods to deep learning along with employing LLMs in modern frameworks.
  • Data Preparation: Prepare, cleanse, and transform data for model training and evaluation.
  • Algorithm Implementation: Implement and optimize machine learning algorithms and statistical models.
  • System Integration: Integrate models into existing systems and workflows.
  • Model Deployment: Deploy models to production environments and monitor performance.
  • Collaboration: Work closely with data scientists, software engineers, and other stakeholders.
  • Continuous Improvement: Identify areas for improvement in model performance and systems.


SKILLS:

  • Programming and Software Engineering: Knowledge of software engineering best practices (version control, testing, CI/CD).
  • Data Engineering: Ability to handle data pipelines, data cleaning, and feature engineering. Proficiency in SQL for data manipulation, plus Kafka, ChaosSearch logs, etc. for troubleshooting; other tech touch points are ScyllaDB (similar to Bigtable), OpenSearch, and Neo4j graph
  • Model Deployment and Monitoring: MLOps experience in deploying ML models to production environments.
  • Knowledge of model monitoring and performance evaluation.


REQUIRED EXPERIENCE:

  • Amazon SageMaker: Deep understanding of SageMaker's capabilities for building, training, and deploying ML models; understanding of the SageMaker pipeline, with the ability to analyze gaps and recommend/implement improvements
  • AWS Cloud Infrastructure: Familiarity with S3, EC2, Lambda and using these services in ML workflows
  • AWS data: Redshift, Glue
  • Containerization and Orchestration: Understanding of Docker and Kubernetes, and their implementation within AWS (EKS, ECS)
Read more
Hashone Careers

at Hashone Careers

2 candid answers
Madhavan I
Posted by Madhavan I
Mumbai
5 - 8 yrs
₹12L - ₹24L / yr
Data engineering
skill iconPython
SQL

Job Description

Location: Mumbai (with short/medium-term travel opportunities within India & foreign locations)

Experience: 5 -8 years

Job Type: Full-time

About the Role

We are looking for experienced data engineers who can independently build, optimize, and manage scalable data pipelines and platforms. In this role, you’ll work closely with clients and internal teams to deliver robust data solutions that power analytics, AI/ML, and operational systems. You’ll also help mentor junior engineers and bring engineering discipline into our data engagements.

Key Responsibilities

Design, build, and optimize large-scale, distributed data pipelines for both batch and streaming use cases.


Implement scalable data models, data warehouses/lakehouses, and data lakes to support analytics and decision-making.


Collaborate with cross-functional stakeholders to understand business requirements and translate them into technical data solutions.


Drive performance tuning, monitoring, and reliability of data pipelines.


Write clean, modular, and production-ready code with proper documentation and testing.


Contribute to architectural discussions, tool evaluations, and platform setup.


Mentor junior engineers and participate in code/design reviews.


Must-Have Skills

Strong programming skills in Python and advanced SQL expertise.


Deep understanding of data engineering concepts such as ETL/ELT, data modeling (OLTP & OLAP), warehousing, and stream processing (a toy ETL sketch follows the must-have list).


Experience with distributed data processing frameworks (e.g., Apache Spark, Flink, or similar).


Exposure to Java is mandatory.


Experience with building pipelines using orchestration tools like Airflow or similar.


Familiarity with CI/CD pipelines and version control tools like Git.


Ability to debug, optimize, and scale data pipelines in real-world settings.
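
As the toy ETL sketch referenced in the must-have list above (the CSV columns and target table are invented; real pipelines would run on Spark/Airflow-scale tooling), a batch pipeline reduces to extract, transform, load:

import csv
import io
import sqlite3

# Extract: a CSV stand-in for a source-system export.
RAW = io.StringIO("user_id,amount\n1,10.5\n2,\n3,7.25\n")

def extract(handle):
    return list(csv.DictReader(handle))

def transform(rows):
    # Drop rows with missing amounts and cast types (a simple quality rule).
    return [(int(r["user_id"]), float(r["amount"])) for r in rows if r["amount"]]

def load(records, conn):
    conn.execute("CREATE TABLE IF NOT EXISTS payments (user_id INTEGER, amount REAL)")
    conn.executemany("INSERT INTO payments VALUES (?, ?)", records)

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW)), conn)
print(conn.execute("SELECT COUNT(*), SUM(amount) FROM payments").fetchone())  # (2, 17.75)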


Good to Have

Experience working on any major cloud platform (AWS preferred; GCP or Azure also welcome).


Exposure to Databricks, dbt, or similar platforms is a plus.


Experience with Snowflake is preferred.


Understanding of data governance, data quality frameworks, and observability.


Certification in AWS (e.g., Data Analytics, Solutions Architect) or Databricks is a plus.


Other Expectations

Comfortable working in fast-paced, client-facing environments.


Strong analytical and problem-solving skills with attention to detail.


Ability to adapt across tools, stacks, and business domains.


Willingness to travel within India for short/medium-term client engagements as needed.



Read more
Nerve Solutions
Srishti Shetty
Posted by Srishti Shetty
Mumbai
0 - 4 yrs
₹1L - ₹8L / yr
skill iconC#
skill iconJava
SQL

We are looking for a passionate and skilled Software Developer to join our team. The role involves designing and developing new features, integrating APIs, and ensuring the overall quality of our product. Along with hands-on coding, you will also be responsible for testing, validating, and publishing releases, while continuously working on improving development processes.


Roles & Responsibilities : 

  • Design, develop, and implement new features to enhance product functionality.
  • Work on API integrations to support business and product requirements.
  • Conduct development testing to ensure functionality, performance, and reliability.
  • Prepare and publish product releases after thorough validation.
  • Continuously improve development workflows, code quality, and efficiency.
  • Provide technical assistance and troubleshoot issues faced by the team.
  • Collaborate with cross-functional teams to ensure smooth project execution.
  • Contribute to knowledge sharing and provide mentoring support to team members.


Required Skills & Qualifications : 

  • Strong proficiency in web development technologies (C#, C++, MongoDB, etc.).
  • Hands-on experience in API development and integration.
  • Solid understanding of development testing methodologies.
  • Proficiency in version control systems (e.g., Git).
  • Strong analytical and problem-solving skills with keen attention to detail.
  • Excellent communication skills with the ability to work effectively in a team.
  • Self-motivated, proactive, and capable of working independently.
  • Bachelor’s degree in Computer Science, Engineering, or related field
Read more
Wissen Technology

at Wissen Technology

4 recruiters
Praffull Shinde
Posted by Praffull Shinde
Bengaluru (Bangalore)
3 - 6 yrs
Best in industry
Google Ads
Digital Marketing
Search Engine Optimization (SEO)
SQL
skill iconGoogle Analytics

This role is focused entirely on the manual implementation of changes within the Google Ads platform, based on detailed input provided by the Client Business team. The ideal candidate thrives in an execution-focused role, ensuring accuracy and efficiency in translating strategic input into platform updates.

 

Job Description:

 

Key Responsibilities

The primary focus of this role is the accurate and timely execution of changes and reporting across multiple Google Ads accounts, specifically involving:

  1. Execution of Campaign Strategies & Quality Assurance:

This involves the manual and precise implementation of all specified campaign levers based on provided input files, with a strict focus on accuracy:

o  Campaign Implementation: Manual execution and verification of changes related to campaign structure, assets, audience signals, search themes, location targets, bid adjustments, negative keywords, etc. based on detailed input files.

o  Conversion Tracking Implementation: Uploading Google Click IDs (GCLIDs) to the Google Ads platform for offline conversion tracking and measurement, ensuring correct mapping and adherence to privacy protocols.

o  Data Translation & Accuracy: Meticulously reviewing and translating inputs from Excel/Google Sheets into the live Google Ads UI, ensuring zero-error implementation.

o  Quality Assurance: Maintaining a comprehensive log of all executed changes for quality assurance and audit purposes.


  2. Performance Tracking and Reporting:

 This involves the collection, processing, and presentation of campaign data to support business decisions:

o  Campaign Monitoring: Tracking overall campaign details, performance metrics, and budgets on a daily or weekly basis to ensure operational health.

o  Manual Report Creation: Creating and updating manual performance reports for dealers and internal stakeholders using Google Ads UI reports or internal databases using SQL, ensuring data accuracy and timely delivery.
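
As a hedged sketch of the SQL-based reporting described above, the query below rolls up spend, clicks, and conversions per campaign. The table and column names are invented for illustration (the real internal schema will differ); SQLite stands in for the internal database so the example runs as-is.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE ad_performance (
        campaign TEXT, day TEXT, cost REAL, clicks INTEGER, conversions INTEGER
    );
    INSERT INTO ad_performance VALUES
        ('brand_search', '2024-01-01', 120.0, 300, 12),
        ('brand_search', '2024-01-02', 95.5, 250, 9),
        ('pmax_dealers', '2024-01-01', 210.0, 410, 18);
""")

# Rollup per campaign: spend, clicks, conversions, and cost per conversion.
report = conn.execute("""
    SELECT campaign,
           SUM(cost)        AS total_cost,
           SUM(clicks)      AS total_clicks,
           SUM(conversions) AS total_conversions,
           ROUND(SUM(cost) / SUM(conversions), 2) AS cost_per_conversion
    FROM ad_performance
    GROUP BY campaign
    ORDER BY total_cost DESC
""").fetchall()

for row in report:
    print(row)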


Required Qualifications

o  Experience: Minimum of 1+ years of hands-on, practical experience executing changes within the Google Ads UI and Google Analytics / Tag Manager across multiple accounts; 3 to 5 years of experience overall.

o  Platform Proficiency: Working knowledge of the Google Ads UI, including the ability to navigate complex account structures and features (specifically Performance Max campaigns).

o  Technical Data Expertise: Demonstrated ability to work with large data sets and perform manual data pulls using:

§ SQL (Expertise Required): Ability to execute queries for report generation and data verification.

§ Excel/Google Sheets: Advanced proficiency for data manipulation, cleaning, and presentation.

o  Attention to Detail: Exceptional focus on detail and accuracy is non-negotiable. Must be capable of translating detailed spreadsheet data into the live platform without errors.

o  Process Driven: Proven ability to follow standardized, step-by-step instructions and processes efficiently and consistently.

o  Communication: Clear written communication skills to report on task status and flag any discrepancies in input files. Candidates who add further value by pushing back and drawing on their existing knowledge to help do the right thing will stand out.


Preferred Qualifications

o  Prior experience working on high-volume, highly complex Google Ads accounts.

o  Understanding of the purpose behind the executed tasks (e.g., how negative keywords impact campaigns, the function of Audience Signals).

Read more
Arcitech
Arcitech HR Department
Posted by Arcitech HR Department
Navi Mumbai
2 - 5 yrs
₹5L - ₹12L / yr
Large Language Models (LLM)
Retrieval Augmented Generation (RAG)
PyTorch
Keras
pytest
+4 more

Python Developer - AI/ML

Your Responsibilities

  • Develop, train, and optimize ML models using PyTorch, TensorFlow, and Keras.
  • Build end-to-end LLM and RAG pipelines using LangChain and LangGraph.
  • Work with LLM APIs (OpenAI, Anthropic Claude, Azure OpenAI) and implement prompt engineering strategies.
  • Utilize Hugging Face Transformers for model fine-tuning and deployment.
  • Integrate embedding models for semantic search and retrieval systems.
  • Work with transformer-based architectures (BERT, GPT, LLaMA, Mistral) for production use cases.
  • Implement LLM evaluation frameworks (RAGAS, LangSmith) and performance optimization.
  • Design and maintain Python microservices using FastAPI with REST/GraphQL APIs.
  • Implement real-time communication with FastAPI WebSockets.
  • Implement pgvector for embedding storage and similarity search with efficient indexing strategies (see the retrieval sketch after this list).
  • Integrate vector databases (pgvector, Pinecone, Weaviate, FAISS, Milvus) for retrieval pipelines.
  • Containerize AI services with Docker and deploy on Kubernetes (EKS/GKE/AKS).
  • Configure AWS infrastructure (EC2, S3, RDS, SageMaker, Lambda, CloudWatch) for AI/ML workloads.
  • Version ML experiments using MLflow, Weights & Biases, or Neptune.
  • Deploy models using serving frameworks (TorchServe, BentoML, TensorFlow Serving).
  • Implement model monitoring, drift detection, and automated retraining pipelines.
  • Build CI/CD pipelines for automated testing and deployment with ≥80% test coverage (pytest).
  • Follow security best practices for AI systems (prompt injection prevention, data privacy, API key management).
  • Participate in code reviews, tech talks, and AI learning sessions.
  • Follow Agile/Scrum methodologies and Git best practices.
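
For the pgvector/similarity-search item referenced above, here is a minimal NumPy sketch of cosine-similarity retrieval. The vectors are random stand-ins for real embeddings; a production system would delegate this to pgvector, FAISS, or another vector index rather than a full scan.

import numpy as np

rng = np.random.default_rng(0)

# Stand-in corpus embeddings (one row per document) and a query embedding;
# real vectors would come from an embedding model.
corpus = rng.normal(size=(1000, 384)).astype(np.float32)
query = rng.normal(size=384).astype(np.float32)

def top_k_cosine(query: np.ndarray, corpus: np.ndarray, k: int = 5) -> np.ndarray:
    # Normalize rows so the dot product equals cosine similarity.
    corpus_norm = corpus / np.linalg.norm(corpus, axis=1, keepdims=True)
    query_norm = query / np.linalg.norm(query)
    scores = corpus_norm @ query_norm
    return np.argsort(scores)[::-1][:k]  # indices of the k best matches

print(top_k_cosine(query, corpus))

In pgvector terms, the same ranking corresponds to ORDER BY embedding <=> :query LIMIT k over an indexed vector column.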

Required Qualifications

  • Bachelor's or Master's degree in Computer Science, AI/ML, or related field.
  • 2–5 years of Python development experience (Python 3.9+) with strong AI/ML background.
  • Hands-on experience with LangChain and LangGraph for building LLM-powered workflows and RAG systems.
  • Deep learning experience with PyTorch or TensorFlow.
  • Experience with Hugging Face Transformers and model fine-tuning.
  • Proficiency with LLM APIs (OpenAI, Anthropic, Azure OpenAI) and prompt engineering.
  • Strong experience with FastAPI frameworks.
  • Proficiency in PostgreSQL with pgvector extension for embedding storage and similarity search.
  • Experience with vector databases (pgvector, Pinecone, Weaviate, FAISS, or Milvus).
  • Experience with model versioning tools (MLflow, Weights & Biases, or Neptune).
  • Hands-on with Docker, Kubernetes basics, and AWS cloud services.
  • Skilled in Git workflows, automated testing (pytest), and CI/CD practices.
  • Understanding of security principles for AI systems.
  • Excellent communication and analytical thinking.

Nice to Have

  • Experience with multiple vector databases (Pinecone, Weaviate, FAISS, Milvus).
  • Knowledge of advanced LLM fine-tuning (LoRA, QLoRA, PEFT) and RLHF.
  • Experience with model serving frameworks and distributed training.
  • Familiarity with workflow orchestration tools (Airflow, Prefect, Dagster).
  • Knowledge of quantization and model compression techniques.
  • Experience with infrastructure as code (Terraform, CloudFormation).
  • Familiarity with data versioning tools (DVC) and AutoML.
  • Experience with Streamlit or Gradio for ML demos.
  • Background in statistics, optimization, or applied mathematics.
  • Contributions to AI/ML or LangChain/LangGraph open-source projects.


Read more
Forbes Advisor

at Forbes Advisor

3 candid answers
Bisman Gill
Posted by Bisman Gill
Remote only
4yrs+
Upto ₹40L / yr (Varies
)
skill iconPython
SQL
Database performance tuning
Data-flow analysis
Data modeling

About Forbes Advisor

Forbes Digital Marketing Inc. is a high-growth digital media and technology company dedicated to helping consumers make confident, informed decisions about their money, health, careers, and everyday life.

We do this by combining data-driven content, rigorous product comparisons, and user-first design — all built on top of a modern, scalable platform. Our global teams bring deep expertise across journalism, product, performance marketing, data, and analytics.

 

The Role

We’re hiring a Data Scientist to help us unlock growth through advanced analytics and machine learning. This role sits at the intersection of marketing performance, product optimization, and decision science.


You’ll partner closely with Paid Media, Product, and Engineering to build models, generate insight, and influence how we acquire, retain, and monetize users. From campaign ROI to user segmentation and funnel optimization, your work will directly shape how we grow. This role is ideal for someone who thrives on business impact, communicates clearly, and wants to build reusable, production-ready insights — not just run one-off analyses.

 

What You’ll Do

Marketing & Revenue Modelling

• Own end-to-end modelling of LTV, user segmentation, retention, and marketing efficiency to inform media optimization and value attribution.

• Collaborate with Paid Media and RevOps to optimize SEM performance, predict high-value cohorts, and power strategic bidding and targeting.

Product & Growth Analytics

• Work closely with Product Insights and General Managers (GMs) to define core metrics, KPIs, and success frameworks for new launches and features.

• Conduct deep-dive analysis of user behaviour, funnel performance, and product engagement to uncover actionable insights.

• Monitor and explain changes in key product metrics, identifying root causes and business impact.

• Work closely with Data Engineering to design and maintain scalable data pipelines that support machine learning workflows, model retraining, and real-time inference.

Predictive Modelling & Machine Learning

• Build predictive models for conversion, churn, revenue, and engagement using regression, classification, or time-series approaches (a minimal sketch follows this section).

• Identify opportunities for prescriptive analytics and automation in key product and marketing workflows.

• Support development of reusable ML pipelines for production-scale use cases in product recommendation, lead scoring, and SEM planning.
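
A minimal, hedged sketch of the churn-style classification mentioned above, using scikit-learn on synthetic data; the features and labels are invented for illustration, and a real model would train on behavioral features from the warehouse.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Synthetic stand-in features: e.g., sessions per week, days since last visit.
X = rng.normal(size=(2000, 2))
# Synthetic churn label loosely tied to inactivity, for illustration only.
y = (X[:, 1] - X[:, 0] + rng.normal(scale=0.5, size=2000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# AUC is a common acceptance gate before promoting a churn model.
probs = model.predict_proba(X_test)[:, 1]
print(f"test AUC: {roc_auc_score(y_test, probs):.3f}")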

Collaboration & Communication

• Present insights and recommendations to a variety of stakeholders — from ICs to executives — in a clear and compelling manner.

• Translate business needs into data problems, and complex findings into strategic action plans.

• Work cross-functionally with Engineering, Product, BI, and Marketing to deliver and deploy your work.

 

What You’ll Bring

Minimum Qualifications

• Bachelor’s degree in a quantitative field (Mathematics, Statistics, CS, Engineering, etc.).

• 4+ years in data science, growth analytics, or decision science roles.

• Strong SQL and Python skills (Pandas, Scikit-learn, NumPy).

• Hands-on experience with Tableau, Looker, or similar BI tools.

• Familiarity with LTV modelling, retention curves, cohort analysis, and media attribution.

• Experience with GA4, Google Ads, Meta, or other performance marketing platforms.

• Clear communication skills and a track record of turning data into decisions.


Nice to Have

• Experience with BigQuery and Google Cloud Platform (or equivalent).

• Familiarity with affiliate or lead-gen business models.

• Exposure to NLP, LLMs, embeddings, or agent-based analytics.

• Ability to contribute to model deployment workflows (e.g., using Vertex AI, Airflow, or Composer).

 

Why Join Us?

• Remote-first and flexible — work from anywhere in India with global exposure.

• Monthly long weekends (every third Friday off).

• Generous wellness stipends and parental leave.

• A collaborative team where your voice is heard and your work drives real impact.

• Opportunity to help shape the future of data science at one of the world’s most trusted brands.

Read more
LogIQ Labs Pvt.Ltd.

at LogIQ Labs Pvt.Ltd.

2 recruiters
HR eShipz
Posted by HR eShipz
Bengaluru (Bangalore)
3 - 5 yrs
₹8L - ₹12L / yr
skill iconPython
API
SQL

The Python Support Engineer is responsible for providing Tier 2/Tier 3 technical support for our software applications and systems, focusing primarily on components built with Python. This role involves diagnosing and resolving complex production issues, performing Root Cause Analysis (RCA), developing temporary workarounds, and implementing permanent code fixes using Python.

🛠️ Key Responsibilities

1. Technical Support & Troubleshooting

  • Diagnose and Resolve Issues: Act as the escalation point for complex technical issues related to Python applications, backend services (APIs), and data pipelines.
  • Log and Data Analysis: Utilize advanced analytical skills and Python scripts (e.g., using Pandas or regular expressions) to parse system logs, database records, and monitoring data to pinpoint the root cause of failures (see the sketch after this list).
  • Debugging and Fixes: Read, understand, debug, and modify existing Python code to implement necessary bug fixes, patches, and minor enhancements.
  • Database Interaction: Write and execute complex SQL queries to investigate data integrity issues and system performance problems across various relational (e.g., PostgreSQL, MySQL) and NoSQL databases.
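
As a hedged sketch of the log-analysis work referenced above, the snippet below counts ERROR entries by component and reason; the log format and values are invented for illustration, and real logs would be read from files.

import re
from collections import Counter

# Hypothetical application log lines, purely illustrative.
LOG_LINES = [
    "2024-05-01 10:00:01 ERROR api timeout id=42",
    "2024-05-01 10:00:03 INFO  api ok id=43",
    "2024-05-01 10:00:09 ERROR db  deadlock id=44",
    "2024-05-01 10:00:11 ERROR api timeout id=45",
]

PATTERN = re.compile(r"^(\S+ \S+) ERROR (\w+)\s+(\w+)")

def summarize_errors(lines):
    # Count ERROR entries by (component, reason) to spot failure clusters.
    counts = Counter()
    for line in lines:
        match = PATTERN.match(line)
        if match:
            _, component, reason = match.groups()
            counts[(component, reason)] += 1
    return counts

print(summarize_errors(LOG_LINES).most_common())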

2. Development and Automation

  • Automation: Develop and maintain Python scripts and utility tools (e.g., using Bash/Shell scripting) to automate repetitive support tasks, streamline system health checks, and improve incident response efficiency.
  • Monitoring and Alerting: Configure and fine-tune monitoring tools (e.g., Prometheus, Grafana, ELK stack) to proactively detect issues and ensure system reliability.
  • Documentation: Create and maintain detailed technical documentation, including RCAs, knowledge base articles, runbooks, and troubleshooting guides for the support team.


Read more
Technology Industry
Pune
10 - 14 yrs
₹15L - ₹40L / yr
User Research
skill iconGoogle Analytics
skill iconData Analytics
Mixpanel
CleverTap
+10 more

Review Criteria

  • Strong Lead – User Research & Analyst profile (behavioural/user/product/ux analytics)
  • 10+ years of experience in Behavioral Data Analytics, User Research, or Product Insights, driving data-informed decision-making for B2C digital products (web and app).
  • 6 months+ experience in analyzing user journeys, clickstream, and behavioral data using tools such as Google Analytics, Mixpanel, CleverTap, Firebase, or Amplitude.
  • Experience in leading cross-functional user research and analytics initiatives in collaboration with Product, Design, Engineering, and Business teams to translate behavioral insights into actionable strategies.
  • Strong expertise in A/B testing and experimentation, including hypothesis design, execution, statistical validation, and impact interpretation (a minimal significance-test sketch follows this list).
  • Ability to identify behavioral patterns, funnel drop-offs, engagement trends, and user journey anomalies using large datasets and mixed-method analysis.
  • Hands-on proficiency in SQL, Excel, and data visualization/storytelling tools such as Tableau, Power BI, or Looker for executive reporting and dashboard creation.
  • Deep understanding of UX principles, customer journey mapping, and product experience design, with experience integrating qualitative and quantitative insights.
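
To make the A/B-testing criterion above concrete, here is a hedged two-proportion z-test on invented conversion counts, using only the standard library; real experiments also need power analysis and guardrail metrics.

import math

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> tuple[float, float]:
    # Returns (z statistic, two-sided p-value) comparing conversion rates A vs B.
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Invented example: control converts 200/5000, variant converts 250/5000.
z, p = two_proportion_z(200, 5000, 250, 5000)
print(f"z={z:.2f}, p={p:.4f}")  # p < 0.05 passes a conventional significance gate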

 

Preferred

  • Ability to build insightful dashboards and executive reports highlighting user engagement, retention, and behavioral metrics; familiarity with mixed-method research, AI-assisted insight tools (Dovetail, EnjoyHQ, Qualtrics, UserZoom), and mentoring junior researchers


Job Specific Criteria

  • CV Attachment is mandatory
  • We work on alternate Saturdays. Are you comfortable working from home on the 1st and 4th Saturdays?

 

Role & Responsibilities

Product Conceptualization & UX Strategy Development:

  • Conceptualize customer experience strategies
  • Collaborate with product managers to conceptualize new products & align UX with product roadmaps.
  • Develop and implement UX strategies that align with business objectives.
  • Stay up-to-date with industry trends and best practices in UX & UI for AI.
  • Assist in defining product requirements and features.
  • Use data analytics to inform product strategy and prioritize features.
  • Ensure product alignment with customer needs and business goals.
  • Develop platform blueprints that include a features and functionalities map, ecosystem map, and information architecture.
  • Create wireframes, prototypes, and mock-ups using tools like Figma
  • Conduct usability testing and iterate designs based on feedback
  • Employ tools like X-Mind for brainstorming and mind mapping


Customer Journey Analysis:

  • Understand and map out customer journeys and scenarios.
  • Identify pain points and opportunities for improvement.
  • Develop customer personas and empathy maps.


Cross-Functional Collaboration:

  • Work closely with internal units such as UX Research, Design, UX Content, and UX QA to ensure seamless delivery of CX initiatives.
  • Coordinate with development teams to ensure UX designs are implemented accurately.


Data Analytics and Tools:

  • Utilize clickstream and analytics tools like Google Analytics, CleverTap, and Medallia to gather and analyse user data.
  • Leverage data to drive decisions and optimize customer experiences.
  • Strong background in data analytics, including proficiency in interpreting complex datasets to inform UX decisions.

 

Ideal Candidate

  • Bachelor’s or Master’s degree in a relevant field (e.g., UX Design, Human-Computer Interaction, Computer Science, Marketing).
  • 5+ years of experience in CX/UX roles, preferably in a B2C environment.
  • Proficiency in analytics tools (Google Analytics, CleverTap, Medallia, Hotjar, etc.).
  • Strong understanding of wireframing and prototyping tools (Figma, XMind, etc.).
  • Excellent communication and collaboration skills.
  • Proven experience in managing cross-functional teams and projects.
  • Strong background in data analytics and data-driven decision-making.
  • Expert understanding of user experience and user-centered design approaches
  • Detail-oriented, with the experience and will to continuously learn, adapt, and evolve
  • Creating and measuring the success and impact of your CX designs
  • Knowledge of testing tools like Maze, UsabilityHub, and UserZoom would be a plus
  • Experienced in designing responsive websites as well as mobile apps
  • Understanding of iOS and Android design guidelines
  • Passion for great customer-focused design, a purposeful aesthetic sense, and generating simple solutions to complex problems.
  • Excellent communication skills to be able to present their work and ideas to the leadership team.


Read more
Tech AI startup in Bangalore

Tech AI startup in Bangalore

Agency job
via Recruit Square by Priyanka choudhary
Remote only
4 - 8 yrs
₹12L - ₹18L / yr
pandas
NumPy
MLOps
SQL
ETL
+1 more

Data Engineer – Validation & Quality


Responsibilities

  • Build rule-based and statistical validation frameworks using Pandas / NumPy (a minimal sketch follows this list).
  • Implement contradiction detection, reconciliation, and anomaly flagging.
  • Design and compute confidence metrics for each evidence record.
  • Automate schema compliance, sampling, and checksum verification across data sources.
  • Collaborate with the Kernel to embed validation results into every output artifact.
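
A minimal, hedged sketch of the rule-based and statistical validation mentioned in the first item above, using Pandas z-scores to flag outliers; the column names and thresholds are illustrative assumptions.

import pandas as pd

# Illustrative evidence records; a real framework would load from source systems.
df = pd.DataFrame({
    "record_id": range(6),
    "amount": [10.0, 11.5, 9.8, 10.2, 250.0, 10.7],  # one planted anomaly
})

def flag_anomalies(frame: pd.DataFrame, column: str, z_threshold: float = 3.0) -> pd.DataFrame:
    # Rule: values must be non-null; statistic: |z-score| above threshold is anomalous.
    values = frame[column]
    z = (values - values.mean()) / values.std(ddof=0)
    out = frame.copy()
    out["null_violation"] = values.isna()
    out["anomaly"] = z.abs() > z_threshold
    return out

flagged = flag_anomalies(df, "amount", z_threshold=2.0)
print(flagged[flagged["anomaly"]])  # the planted 250.0 record is flagged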

Requirements

  • 5+ years in data engineering, data quality, or MLOps validation.
  • Strong SQL optimization and ETL background.
  • Familiarity with data lineage, DQ frameworks, and regulatory standards (SOC 2 / GDPR).
Read more
Wissen Technology

at Wissen Technology

4 recruiters
Nishita Bangera
Posted by Nishita Bangera
Bengaluru (Bangalore)
4 - 7 yrs
₹5L - ₹25L / yr
skill iconPython
skill iconDjango
skill iconFlask
SQL
skill iconAmazon Web Services (AWS)
+1 more

🔧 Key Skills

  • Strong expertise in Python (3.x)
  • Experience with Django / Flask / FastAPI
  • Good understanding of Microservices & RESTful API development
  • Proficiency in MySQL/PostgreSQL – queries, stored procedures, optimization
  • Solid grip on Data Structures & Algorithms (DSA)
  • Comfortable working with Linux & Windows environments
  • Hands-on experience with Git, CI/CD (Jenkins/GitHub Actions)
  • Familiarity with Docker / Kubernetes is a plus


Read more
Antstack Technologies Pvt Ltd
Agency job
via AccioJob by AccioJobHiring Board
Bengaluru (Bangalore)
0 - 1 yrs
₹4.5L - ₹6L / yr
SQL
skill iconJavascript
RESTful APIs
skill iconPython

AccioJob is conducting a Walk-In Hiring Drive with AntStack for the position of Python Backend Developer.


To apply, register and select your slot here: https://go.acciojob.com/WUWVgb


Required Skills: Git, SQL, JavaScript, REST APIs, Cloud Platforms, Python


Eligibility:

  • Degree: BTech./BE, MTech./ME, BCA, MCA, BSc., MSc
  • Branch: All
  • Graduation Year: 2025


Work Details:

  • Work Location: Bangalore (Onsite)
  • CTC: 4.5 LPA to 6 LPA

Evaluation Process:

Round 1: Offline Assessment at AccioJob Bangalore Centre


Important Note: Bring your laptop & earphones for the test.


Further Rounds (for shortlisted candidates only):

  1. Resume Evaluation
  2. Technical Interview 1
  3. Technical Interview 2
  4. Technical Interview 3
  5. HR Discussion


Register here: https://go.acciojob.com/WUWVgb


Read more
Big Rattle Technologies
Sreelakshmi Nair (Big Rattle Technologies)
Posted by Sreelakshmi Nair (Big Rattle Technologies)
Remote, Mumbai
5 - 7 yrs
₹8L - ₹12L / yr
skill iconPython
SQL
skill iconMachine Learning (ML)
Data profiling
E2E
+8 more

Position: QA Engineer – Machine Learning Systems (5 - 7 years)

Location: Remote (Company in Mumbai)

Company: Big Rattle Technologies Private Limited


Immediate Joiners only.


Summary:

The QA Engineer will own quality assurance across the ML lifecycle—from raw data validation through feature engineering checks, model training/evaluation verification, batch prediction/optimization validation, and end-to-end (E2E) workflow testing. The role is hands-on with Python automation, data profiling, and pipeline test harnesses in Azure ML and Azure DevOps. Success means provably correct data, models, and outputs at production scale and cadence.


Key Responsibilities:

Test Strategy & Governance

  • Define an ML-specific Test Strategy covering data quality KPIs, feature consistency checks, model acceptance gates (metrics + guardrails), and E2E run acceptance (timeliness, completeness, integrity).
  • Establish versioned test datasets & golden baselines for repeatable regression of features, models, and optimizers.


Data Quality & Transformation

  • Validate raw data extracts and landed data lake data: schema/contract checks, null/outlier thresholds, time-window completeness, duplicate detection, site/material coverage.
  • Validate transformed/feature datasets: deterministic feature generation, leakage detection, drift vs. historical distributions, feature parity across runs (hash or statistical similarity tests).
  • Implement automated data quality checks (e.g., Great Expectations/pytest + Pandas/SQL) executed in CI and AML pipelines.
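
As a hedged illustration of the automated data-quality checks named in the last item (the posting mentions Great Expectations/pytest + Pandas/SQL; this sketch uses plain pytest + Pandas, and the schema is invented):

import pandas as pd

# Stand-in for a landed raw extract; real tests would read from the data lake.
def load_extract() -> pd.DataFrame:
    return pd.DataFrame({
        "site_id": ["S1", "S2", "S3"],
        "price": [9.99, 12.50, 11.00],
        "ts": pd.to_datetime(["2024-06-01", "2024-06-01", "2024-06-01"]),
    })

def test_schema_contract():
    # Contract check: exactly these columns, in this order.
    assert list(load_extract().columns) == ["site_id", "price", "ts"]

def test_no_nulls_and_positive_prices():
    df = load_extract()
    assert df["price"].notna().all()
    assert (df["price"] > 0).all()

def test_no_duplicate_sites_per_day():
    # Duplicate detection across the (site, timestamp) grain.
    assert not load_extract().duplicated(subset=["site_id", "ts"]).any()

Run with pytest; in CI or an AML pipeline step, a failing check would block the downstream training job.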

Model Training & Evaluation

  • Verify training inputs (splits, windowing, target leakage prevention) and hyperparameter configs per site/cluster.
  • Automate metric verification (e.g., MAPE/MAE/RMSE, uplift vs. last model, stability tests) with acceptance thresholds and champion/challenger logic.
  • Validate feature importance stability and sensitivity/elasticity sanity checks (price/volume monotonicity where applicable).
  • Gate model registration/promotion in AML based on signed test artifacts and reproducible metrics.


Predictions, Optimization & Guardrails

  • Validate batch predictions: result shapes, coverage, latency, and failure handling.
  • Test model optimization outputs and enforced guardrails: detect violations and prove idempotent writes to DB.
  • Verify API push to third party system (idempotency keys, retry/backoff, delivery receipts).
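
A hedged sketch of the retry/backoff and idempotency-key pattern referenced in the last item; the sender and header name are invented stand-ins for the third-party system's actual API.

import random
import time
import uuid

def push_with_retry(payload: dict, send, max_attempts: int = 5) -> dict:
    # One idempotency key is reused across attempts so the receiver can
    # de-duplicate repeated deliveries of the same payload.
    idempotency_key = str(uuid.uuid4())
    for attempt in range(1, max_attempts + 1):
        try:
            return send(payload, headers={"Idempotency-Key": idempotency_key})
        except ConnectionError:
            if attempt == max_attempts:
                raise
            # Exponential backoff (1s, 2s, 4s, ...) plus random jitter.
            time.sleep(2 ** (attempt - 1) + random.random())

# Illustrative flaky sender that fails twice before succeeding.
_calls = {"n": 0}
def fake_send(payload, headers):
    _calls["n"] += 1
    if _calls["n"] < 3:
        raise ConnectionError("transient network failure")
    return {"status": "ok", "key": headers["Idempotency-Key"]}

print(push_with_retry({"prediction": 1.23}, fake_send))

A test would assert both that delivery eventually succeeds and that every attempt carried the same key.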


Pipelines & E2E

  • Build pipeline test harnesses for AML pipelines (data-gen nightly, training weekly, prediction/optimization), including orchestrated synthetic runs and fault injection (missing slice, late competitor data, SB backlog).
  • Run E2E tests from raw data store -> ADLS -> AML -> RDBMS -> APIM/Frontend, assert freshness SLOs and audit event completeness (Event Hubs -> ADLS immutable).


Automation & Tooling

  • Develop Python-based automated tests (pytest) for data checks, model metrics, and API contracts; integrate with Azure DevOps (pipelines, badges, gates).
  • Implement data-driven test runners (parameterized by site/material/model-version) and store signed test artifacts alongside models in AML Registry.
  • Create synthetic test data generators and golden fixtures to cover edge cases (price gaps, competitor shocks, cold starts).


Reporting & Quality Ops

  • Publish weekly test reports and go/no-go recommendations for promotions; maintain a defect taxonomy (data vs. model vs. serving vs. optimization).
  • Contribute to SLI/SLO dashboards (prediction timeliness, queue/DLQ, push success, data drift) used for release gates.


Required Skills (hands-on experience in the following):

  • Python automation (pytest, pandas, NumPy), SQL (PostgreSQL/Snowflake), and CI/CD (Azure DevOps) for fully automated ML QA.
  • Strong grasp of ML validation: leakage checks, proper splits, metric selection (MAE/MAPE/RMSE), drift detection, sensitivity/elasticity sanity checks.
  • Experience testing AML pipelines (pipelines/jobs/components) and message-driven integrations (Service Bus/Event Hubs).
  • API test skills (FastAPI/OpenAPI, contract tests, Postman/pytest-httpx) + idempotency and retry patterns.
  • Familiar with feature stores/feature engineering concepts and reproducibility.
  • Solid understanding of observability (App Insights/Log Analytics) and auditability requirements.


Required Qualifications:

  • Bachelor’s or Master’s degree in Computer Science, Information Technology, or related field.
  • 5–7+ years in QA with 3+ years focused on ML/Data systems (data pipelines + model validation).
  • Certification in Azure Data or ML Engineer Associate is a plus.



Why should you join Big Rattle?

Big Rattle Technologies specializes in AI/ ML Products and Solutions as well as Mobile and Web Application Development. Our clients include Fortune 500 companies. Over the past 13 years, we have delivered multiple projects for international and Indian clients from various industries like FMCG, Banking and Finance, Automobiles, Ecommerce, etc. We also specialise in Product Development for our clients.

Big Rattle Technologies Private Limited is ISO 27001:2022 certified and CyberGRX certified.

What We Offer:

  • Opportunity to work on diverse projects for Fortune 500 clients.
  • Competitive salary and performance-based growth.
  • Dynamic, collaborative, and growth-oriented work environment.
  • Direct impact on product quality and client satisfaction.
  • 5-day hybrid work week.
  • Certification reimbursement.
  • Healthcare coverage.

How to Apply:

Interested candidates are invited to submit a resume detailing their experience. Please describe your work experience and the kinds of projects you have worked on, highlighting your contributions and accomplishments on those projects.


Read more
Hashone Careers

at Hashone Careers

2 candid answers
Madhavan I
Posted by Madhavan I
Bengaluru (Bangalore)
4 - 8 yrs
₹12L - ₹30L / yr
skill iconJava
skill iconReact.js
skill iconNextJs (Next.js)
SQL

Job Title:

Java Fullstack Developer - Bangalore


Job Description:


Experience: 4 - 8 years

Work Mode: 5 days Work from Office

Domain: Financial Services (Service-based Business)


Key Expectations: We are looking for Java full stack engineers who can contribute across both backend and frontend systems with minimal supervision and ensure end-to-end ownership of deliverables.


Technical Skills:   

  •     Strong hands-on experience with Java, Spring Boot, and Microservices
  •     Frontend development experience using React.js / Next.js
  •     Should have equal proficiency across both backend and frontend development
  •     Working knowledge of Kafka, MongoDB, Redis, and distributed systems
  •     Strong understanding of Core Java, LLD, and familiarity with Design Patterns and System Design concepts
  •     Hands-on experience in building applications with a focus on scalability, resilience, and performance
  •     Proficient in API integration, state management (Redux/Context API), and frontend optimization
  •     Experience working with SQL or NoSQL databases
  •     Familiarity with HTML5, CSS3, and JavaScript (ES6+) for responsive and maintainable UI development

   


Skills

Java, Microservices, Spring Boot, React.js, Next.js, Kafka, SQL, MongoDB, LLD

Read more
Albert Invent

at Albert Invent

4 candid answers
3 recruiters
Nikita Sinha
Posted by Nikita Sinha
Bengaluru (Bangalore)
2 - 4 yrs
Upto ₹16L / yr (Varies
)
skill iconPython
SQL
AWS Lambda
Snow flake schema
Amazon Redshift


Responsibilities:

• Develop and maintain SQL and NoSQL databases, ensuring high performance, scalability, and reliability.

• Collaborate with the API team and Data Science team to build robust data pipelines and automations.

• Work closely with stakeholders to understand database requirements and provide technical solutions.

• Optimize database queries and performance tuning to enhance overall system efficiency.

• Implement and maintain data security measures, including access controls and encryption.

• Monitor database systems and troubleshoot issues proactively to ensure uninterrupted service.

• Develop and enforce data quality standards and processes to maintain data integrity.

• Create and maintain documentation for database architecture, processes, and procedures.

• Stay updated with the latest database technologies and best practices to drive continuous improvement.

• Expertise in SQL queries and stored procedures, with the ability to optimize and fine-tune complex queries for performance and efficiency (a small query-plan sketch follows this list).

• Experience with monitoring and visualization tools such as Grafana to monitor database performance and health.
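
To ground the query-optimization bullets above, here is a small hedged sketch using SQLite's EXPLAIN QUERY PLAN (the table is invented; MySQL's EXPLAIN plays the same role in the stacks this posting names) to confirm that adding an index turns a full table scan into an index search.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
conn.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 1000, i * 1.5) for i in range(10_000)],
)

query = "SELECT SUM(total) FROM orders WHERE customer_id = ?"

# Before indexing: the plan reports a full scan of `orders`.
print(conn.execute("EXPLAIN QUERY PLAN " + query, (42,)).fetchall())

conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")

# After indexing: the plan switches to a search using the new index.
print(conn.execute("EXPLAIN QUERY PLAN " + query, (42,)).fetchall())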


Requirements:


• Bachelor’s degree in Computer Science, Engineering, or equivalent experience

• 2+ years of experience in data engineering, with a focus on large-scale data systems.

• Proven experience designing data models and access patterns across SQL and NoSQL ecosystems.

• Hands-on experience with technologies like SQL, DynamoDB, S3, and Lambda services.

• Proficient in SQL stored procedures, with extensive expertise in MySQL schema design, query optimization, and resolvers, along with hands-on experience in building and maintaining data warehouses.

• Strong programming skills in Python or JavaScript, with the ability to write efficient, maintainable code.

• Familiarity with observability stacks (Prometheus, Grafana, OpenTelemetry) and debugging production bottlenecks.

• Understanding of cloud infrastructure (preferably AWS), including networking, IAM, and cost optimization.

• Excellent communication and collaboration skills to influence cross-functional technical decisions.

Read more
Prishusoft

at Prishusoft

1 recruiter
Shivani P
Posted by Shivani P
Ahmedabad
1 - 2 yrs
₹2L - ₹4L / yr
skill icon.NET
skill iconC#
skill iconJavascript
MVC Framework
ASP.NET
+4 more

Hiring: ASP.NET MVC / Core Developer

Location: Ahmedabad | Full-time (Onsite / Hybrid)

 

Apply: https://prishusoft.com/jobs/junior-aspnet-mvccore-professional

 

Experience

  • 1–4 years in web application development using ASP.NET technologies.


Key Responsibilities

  • Develop and maintain web apps using ASP.NET MVC/Core.
  • Build and consume RESTful APIs with Web API.
  • Collaborate with front-end and design teams for smooth integration.
  • Write optimized T-SQL queries and manage MS SQL Server.
  • Participate in code reviews and performance improvements.


Technical Skills

  • Proficient in C#, ASP.NET MVC/Core, and Web API.
  • Strong knowledge of JavaScript, HTML, and .NET Framework 4.5+.
  • Hands-on with SQL, indexing, and query optimization.


Bonus Skills

  • Experience with Angular / React / Vue.
  • Familiarity with TypeScript and unit testing (Jasmine, Karma).
  • Understanding of DevOps and CI/CD pipelines.


Soft Skills

  • Good communication and teamwork.
  • Positive attitude and eagerness to learn.


Join us in Ahmedabad and grow with a passionate tech team!

Read more
Kanerika Software

at Kanerika Software

3 candid answers
2 recruiters
Ariba Khan
Posted by Ariba Khan
Hyderabad, Indore, Ahmedabad
8 - 15 yrs
Upto ₹40L / yr (Varies
)
skill iconJava
skill iconSpring Boot
skill iconPostgreSQL
SQL

Job Summary: 


We are in search of a proficient Java Lead with a minimum of 10 years' experience in designing and developing Java applications. The ideal candidate will demonstrate a deep understanding of Java technologies, including Java EE, Spring Framework, and Hibernate. Proficiency in database technologies such as MySQL, Oracle, or PostgreSQL is essential, along with a proven track record of delivering high-quality, scalable, and efficient Java solutions. 


We are looking for you! 

You are a team player and a get-it-done person: intellectually curious, customer-focused, self-motivated, and responsible, able to work under pressure with a positive attitude. You have the zeal to think differently and understand that a career is a journey of making the right choices. The ideal candidate is creative, proactive, a go-getter, and motivated to look for ways to add value in everything they take on.


As an ideal candidate for the Java Lead position, you bring a wealth of experience and expertise in Java development, combined with strong leadership qualities. Your proven track record showcases your ability to lead and mentor teams to deliver high-quality, enterprise-grade applications. Your technical proficiency and commitment to excellence make you a valuable asset in driving innovation and success within our development projects. You possess a team-oriented mindset and a "get-it-done" attitude, inspiring your team members to excel and collaborate effectively.


You have a proven ability to lead mid-to-large-sized teams, emphasizing a quality-first approach and ensuring that projects are delivered on time and within scope. As a Java Lead, you are responsible for overseeing project planning, implementing best practices, and driving technical solutions that align with business objectives. You collaborate closely with development managers, architects, and cross-functional teams to design scalable and robust Java applications.


What You Will Do: 

  • Design and develop RESTful web services.
  • Work hands-on with databases (Oracle / PostgreSQL / MySQL / SQL Server); see the query sketch after this list.
  • Develop web applications leveraging the Spring Framework.
  • Build microservices leveraging Spring Boot.
  • Work with cloud platforms (e.g., AWS, Azure) and containerization technologies.
  • Use continuous integration and CI/CD tools (Jenkins & GitLab).
  • Follow agile methodologies with an emphasis on quality- and standards-based development.
  • Architect, design, and implement complex software systems using relevant technologies (e.g., Java, Python, Node.js).
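
As a hedged sketch of the database work such a role involves, here is the kind of PostgreSQL query a Spring Boot REST endpoint might execute for a paginated listing; the orders schema and bind parameter are assumed for illustration.

  -- Illustrative schema (not from the posting):
  --   orders(order_id BIGSERIAL PRIMARY KEY, customer_id BIGINT, status TEXT, created_at TIMESTAMPTZ)

  -- Keyset pagination: stable under concurrent inserts and index-friendly, unlike large OFFSETs
  SELECT order_id, customer_id, status, created_at
  FROM orders
  WHERE status = 'OPEN'
    AND order_id > $1   -- $1: last order_id the client has seen (bind parameter)
  ORDER BY order_id
  LIMIT 50;

  -- Supporting index for the predicate above
  CREATE INDEX IF NOT EXISTS idx_orders_status_id ON orders (status, order_id);

Keyset pagination is sketched here rather than OFFSET paging because OFFSET-based queries slow down as the offset grows.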

What we need? 

  • BTech in Computer Science or equivalent
  • Minimum 8 years of relevant experience in Java/J2EE technologies
  • Experience in building backend APIs using Spring Boot, Spring DI, and Spring AOP
  • Real-time messaging integration using Kafka or a similar framework
  • Experience in at least one database: Oracle, SQL Server, or PostgreSQL
  • Previous experience managing and leading high-performing software engineering teams

  

Why join us? 

  • Work with a passionate and innovative team in a fast-paced, growth-oriented environment.
  • Gain hands-on experience with exposure to real-world projects.
  • Opportunity to learn from experienced professionals and enhance your skills.
  • Contribute to exciting initiatives and make an impact from day one.
  • Competitive compensation and potential for growth within the company.
  • Recognized for excellence in data and AI solutions with industry awards and accolades.


Employee Benefits:

1. Culture:

  1. Open Door Policy: Encourages open communication and accessibility to management.
  2. Open Office Floor Plan: Fosters a collaborative and interactive work environment.
  3. Flexible Working Hours: Allows employees to have flexibility in their work schedules.
  4. Employee Referral Bonus: Rewards employees for referring qualified candidates.
  5. Appraisal Process Twice a Year: Provides regular performance evaluations and feedback.


2. Inclusivity and Diversity:

  1. Hiring practices that promote diversity: Ensures a diverse and inclusive workforce.
  2. Mandatory POSH training: Promotes a safe and respectful work environment.


3. Health Insurance and Wellness Benefits:

  1. GMC and Term Insurance: Offers medical coverage and financial protection.
  2. Health Insurance: Provides coverage for medical expenses.
  3. Disability Insurance: Offers financial support in case of disability.


4. Child Care & Parental Leave Benefits:

  1. Company-sponsored family events: Creates opportunities for employees and their families to bond.
  2. Generous Parental Leave: Allows parents to take time off after the birth or adoption of a child.
  3. Family Medical Leave: Offers leave for employees to take care of family members' medical needs.


5. Perks and Time-Off Benefits:

  1. Company-sponsored outings: Organizes recreational activities for employees.
  2. Gratuity: Provides a monetary benefit as a token of appreciation.
  3. Provident Fund: Helps employees save for retirement.
  4. Generous PTO: Offers more than the industry standard for paid time off.
  5. Paid sick days: Allows employees to take paid time off when they are unwell.
  6. Paid holidays: Gives employees paid time off for designated holidays.
  7. Bereavement Leave: Provides time off for employees to grieve the loss of a loved one.


6. Professional Development Benefits:

  1. L&D with FLEX- Enterprise Learning Repository: Provides access to a learning repository for professional development.
  2. Mentorship Program: Offers guidance and support from experienced professionals.
  3. Job Training: Provides training to enhance job-related skills.
  4. Professional Certification Reimbursements: Assists employees in obtaining professional certifications.
  5. Promote from Within: Encourages internal growth and advancement opportunities.
Read more
NeoGenCode Technologies Pvt Ltd
Ritika Verma
Posted by Ritika Verma
Pune, Noida, Gurugram
5 - 8 yrs
₹13L - ₹17L / yr
ETL Tester
Azure Data factory
Azure SQL database
Azure ecosystem
SQL

Job Title: Sr. ETL Test Engineer

Experience: 7+ Years

Location: Gurgaon / Noida / Pune (Work From Office)

Joining: Immediate joiners only (≤15 days notice)

About the Role

We are seeking an experienced ETL Test Engineer with strong expertise in cloud-based ETL tools, Azure ecosystem, and advanced SQL skills. The ideal candidate will have a proven track record in validating complex data pipelines, ensuring data integrity, and collaborating with cross-functional teams in an Agile environment.

Key Responsibilities

  • Design, develop, and execute ETL test plans, test cases, and test scripts for cloud-based data pipelines.
  • Perform data validation, transformation, and reconciliation between source and target systems.
  • Work extensively with Azure Data Factory, Azure Synapse Analytics, Azure SQL Database, and related Azure services.
  • Develop and run complex SQL queries for data extraction, analysis, and validation (a reconciliation sketch follows this list).
  • Collaborate with developers, business analysts, and product owners to clarify requirements and ensure comprehensive test coverage.
  • Perform regression, functional, and performance testing of ETL processes.
  • Identify defects, log them, and work with development teams to ensure timely resolution.
  • Participate in Agile ceremonies (daily stand-ups, sprint planning, retrospectives) and contribute to continuous improvement.
  • Ensure adherence to data quality and compliance standards.
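
As a hedged example of source-to-target reconciliation, the following SQL compares row counts and daily aggregates between a staging table and its warehouse target; the stg.orders and dw.fact_orders names are illustrative assumptions, not systems named in this posting.

  -- Illustrative tables (not from the posting): stg.orders is the source extract,
  -- dw.fact_orders is the loaded target

  -- Row-count reconciliation
  SELECT
      (SELECT COUNT(*) FROM stg.orders)     AS source_rows,
      (SELECT COUNT(*) FROM dw.fact_orders) AS target_rows;

  -- Aggregate-level reconciliation: surface days where totals disagree or one side is missing
  SELECT COALESCE(s.order_date, t.order_date) AS order_date,
         s.total_amount AS source_total,
         t.total_amount AS target_total
  FROM (SELECT order_date, SUM(amount) AS total_amount
        FROM stg.orders GROUP BY order_date) s
  FULL OUTER JOIN
       (SELECT order_date, SUM(amount) AS total_amount
        FROM dw.fact_orders GROUP BY order_date) t
    ON s.order_date = t.order_date
  WHERE s.total_amount <> t.total_amount
     OR s.total_amount IS NULL
     OR t.total_amount IS NULL;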

Required Skills & Experience

  • 5+ years of experience in ETL testing, preferably with cloud-based ETL tools.
  • Strong hands-on experience with Azure Data Factory, Azure Synapse Analytics, and Azure SQL.
  • Advanced SQL query writing and performance tuning skills.
  • Strong understanding of data warehousing concepts, data models, and data governance.
  • Experience with Agile methodologies and working in a Scrum team.
  • Excellent communication and stakeholder management skills.
  • Strong problem-solving skills and attention to detail.

Preferred Skills

  • Experience with Python, PySpark, or automation frameworks for ETL testing.
  • Exposure to CI/CD pipelines in Azure DevOps or similar tools.
  • Knowledge of data security, compliance, and privacy regulations.


Read more