50+ Data modeling Jobs in India

Springer Capital is a cross-border asset management firm specializing in real estate investment banking between China and the USA. We are offering a remote internship for aspiring data engineers interested in data pipeline development, data integration, and business intelligence. The internship offers flexible start and end dates. A short quiz or technical task may be required as part of the selection process.
Responsibilities:
▪ Design, build, and maintain scalable data pipelines for structured and unstructured data sources
▪ Develop ETL processes to collect, clean, and transform data from internal and external systems (a minimal sketch follows this list)
▪ Support integration of data into dashboards, analytics tools, and reporting systems
▪ Collaborate with data analysts and software developers to improve data accessibility and performance
▪ Document workflows and maintain data infrastructure best practices
▪ Assist in identifying opportunities to automate repetitive data tasks
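For candidates new to the area, here is a minimal, illustrative ETL sketch in Python (pandas + SQLite). The file name, columns, and target table are hypothetical and not part of this posting; a production pipeline would add logging, incremental loads, and error handling.

```python
import sqlite3

import pandas as pd

# Extract: read a raw CSV export (hypothetical file and columns)
raw = pd.read_csv("leases_raw.csv")

# Transform: basic cleaning -- trim text, parse dates, drop obvious duplicates
raw["property_name"] = raw["property_name"].str.strip()
raw["lease_start"] = pd.to_datetime(raw["lease_start"], errors="coerce")
clean = raw.dropna(subset=["lease_start"]).drop_duplicates(subset=["lease_id"])

# Load: write the cleaned table to a local database for BI tools to query
with sqlite3.connect("warehouse.db") as conn:
    clean.to_sql("leases", conn, if_exists="replace", index=False)
```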
Job Title : Informatica MDM Developer
Experience : 7 to 10 Years
Location : Bangalore (3 Days Work From Office – ITPL Main Road, Mahadevapura)
Job Type : Full-time / Contract
Job Overview :
We are hiring an experienced Informatica MDM Developer to join our team in Bangalore. The ideal candidate will play a key role in implementing and customizing Master Data Management (MDM) solutions using Informatica MDM (Multi-Domain Edition), ensuring a trusted, unified view of enterprise data.
Mandatory Skills :
Informatica MDM (Multi-Domain Edition), ActiveVOS workflows, Java (User Exits), Services Integration Framework (SIF) APIs, SQL/PLSQL, Data Modeling, Informatica Data Quality (IDQ), MDM concepts (golden record, survivorship, trust, hierarchy).
Key Responsibilities :
- Configure Informatica MDM Hub : subject area models, base objects, relationships.
- Develop match/merge rules and trust/survivorship logic to create golden records (see the sketch after this list).
- Design workflows using ActiveVOS for data stewardship and exception handling.
- Integrate with source/target systems (ERP, CRM, Data Lakes, APIs).
- Customize user exits (Java), SIF APIs, and business entity services.
- Implement and maintain data quality validations using IDQ.
- Collaborate with cross-functional teams for governance alignment.
- Support MDM jobs, synchronization, batch groups, and performance tuning.
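For illustration only, a tool-agnostic Python sketch of trust-based survivorship producing a golden record. This is not Informatica's API; the sources, trust scores, and attributes are hypothetical. In Informatica MDM itself, this logic is typically configured declaratively through trust and survivorship settings rather than hand-coded.

```python
# Hypothetical source records for the same customer, each tagged with a per-source trust score
records = [
    {"source": "CRM", "trust": 0.9, "name": "Acme Corp.", "phone": None,          "updated": "2024-06-01"},
    {"source": "ERP", "trust": 0.7, "name": "ACME Corp",  "phone": "+1-555-0100", "updated": "2024-05-20"},
    {"source": "Web", "trust": 0.4, "name": "Acme",       "phone": "+1-555-0199", "updated": "2024-06-10"},
]

def survivorship(records, attributes):
    """Per attribute, keep the non-null value from the most trusted (then most recent) record."""
    golden = {}
    for attr in attributes:
        candidates = [r for r in records if r.get(attr) is not None]
        best = max(candidates, key=lambda r: (r["trust"], r["updated"]))
        golden[attr] = best[attr]
    return golden

print(survivorship(records, ["name", "phone"]))
# -> {'name': 'Acme Corp.', 'phone': '+1-555-0100'}
```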
Must-Have Skills :
- 7 to 10 years of experience in Data Engineering or MDM.
- 5+ years hands-on with Informatica MDM (Multi-Domain Edition).
- Strong in MDM concepts : golden record, trust, survivorship, hierarchy.
Proficient in :
- Informatica MDM Hub Console, Provisioning Tool, SIF.
- ActiveVOS workflows, Java-based user exits.
- SQL, PL/SQL, and data modeling.
- Experience with system integration and Informatica Data Quality (IDQ).
Nice-to-Have :
- Knowledge of Informatica EDC, Axon, cloud MDM (AWS/GCP/Azure).
- Understanding of data lineage, GDPR/HIPAA compliance, and DevOps tools.
Job Summary:
We are looking for a highly skilled and detail-oriented Database Architect – MS SQL Server with 5+ years of hands-on experience to lead the design, development, optimization, and maintenance of robust and scalable database solutions. You will play a key role in defining database architecture and standards, supporting development teams, and ensuring high performance, data integrity, and availability across our SQL Server environments.
Key Responsibilities:
- Architect, design, and implement secure and scalable database solutions using Microsoft SQL Server.
- Support software developers by writing complex T-SQL scripts, tuning DML, and creating stored procedures.
- Develop and maintain high-performance queries, views, triggers, and user-defined functions.
- Analyze and optimize existing SQL queries for performance improvement and resource efficiency.
- Collaborate closely with application developers, product managers, and DevOps teams to align data architecture with business and technical needs.
- Monitor, maintain, and troubleshoot database systems to ensure optimal performance and uptime.
- Design and implement solutions for database backup, restoration, migration, and replication.
- Enforce data integrity, access control, and user permission strategies across all environments.
- Create and manage constraints, indexes, and partitioning to support performance at scale (see the sketch after this list).
- Document database architecture, configurations, and best practices.
- Plan and execute database upgrades, version control, and migration strategies.
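As a hedged illustration of the indexing work referenced above, a small Python/pyodbc sketch that creates a supporting nonclustered index and runs a sargable query. The connection string, table, and columns are hypothetical placeholders, not this role's actual schema.

```python
import pyodbc

# Hypothetical connection string and table; the index targets a frequent filter + sort pattern
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};SERVER=sql01;DATABASE=Sales;Trusted_Connection=yes;"
)
cur = conn.cursor()

cur.execute("""
    IF NOT EXISTS (SELECT 1 FROM sys.indexes WHERE name = 'IX_Orders_CustomerId_OrderDate')
        CREATE NONCLUSTERED INDEX IX_Orders_CustomerId_OrderDate
        ON dbo.Orders (CustomerId, OrderDate DESC)
        INCLUDE (TotalAmount);
""")
conn.commit()

# A sargable query the index above can satisfy without a key lookup
cur.execute("""
    SELECT TOP (50) OrderDate, TotalAmount
    FROM dbo.Orders
    WHERE CustomerId = ?
    ORDER BY OrderDate DESC;
""", 42)
rows = cur.fetchall()
```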
Requirements:
- 5+ years of proven experience in SQL Server database development and administration.
- Strong command of T-SQL programming and query optimization techniques.
- Deep understanding of database design principles, normalization, and data modeling.
- Experience working with high availability and disaster recovery strategies (e.g., AlwaysOn, log shipping, mirroring).
- Strong experience with indexes, constraints, triggers, and performance tuning tools.
- Familiarity with SQL Server Agent, SSIS, and automation of routine tasks.
- Solid understanding of data security, compliance, and role-based access controls.
- Ability to work independently and collaboratively in agile, cross-functional teams.
- Excellent analytical, problem-solving, and communication skills.
Preferred Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related technical field.
- Microsoft certifications (e.g., Microsoft Certified: Azure Database Administrator Associate or equivalent).
- Experience with cloud-based SQL Server instances (Azure or AWS) is a plus.
- Exposure to DevOps practices, CI/CD pipelines, or Infrastructure as Code tools.
Required Skills:
● 6+ years of experience with hybrid data environments that leverage both distributed and relational database technologies to support analytics services (Oracle, IBM DB2, GCP)
● Solid understanding of data warehousing principles, architecture, and its implementation in complex environments.
● Good experience in OLTP and OLAP systems
● Excellent Data Analysis skills
● Good understanding of one or more ETL tools and data ingestion frameworks.
● Experience designing complex dimensional data models for analytics services (see the sketch after this list)
● Experience with various testing methodologies and user acceptance testing.
● Experience on one or more cloud platforms (e.g., AWS, Azure, GCP)
● Understanding of Data Quality and Data Governance
● Understanding of Industry Data Models
● Experience in leading large teams
● Experience with processing large datasets from multiple sources.
● Ability to operate effectively and independently in a dynamic, fluid environment.
● Good understanding of agile methodology
● Strong verbal and written communications skills with experience in relating complex concepts to non-technical users.
● Demonstrated ability to exchange ideas and convey complex information clearly and concisely
● Proven ability to lead and drive projects and assignments to completion
● Exposure to Data Modeling Tools
○ ERwin
○ PowerDesigner
○ Business Glossary
○ ER/Studio
○ Enterprise Architect
○ MagicDraw
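By way of illustration, a minimal pandas sketch of dimensional modeling: splitting a flat extract into dimension and fact tables with surrogate keys. The columns and data are hypothetical and only demonstrate the star-schema idea.

```python
import pandas as pd

# Hypothetical flat extract from a transactional system
sales = pd.DataFrame({
    "order_id": [1, 2, 3],
    "order_date": ["2024-06-01", "2024-06-01", "2024-06-02"],
    "customer_name": ["Asha", "Ravi", "Asha"],
    "product_name": ["Widget", "Gadget", "Widget"],
    "amount": [120.0, 300.0, 120.0],
})

# Dimensions: one row per distinct business entity, with surrogate keys
dim_customer = sales[["customer_name"]].drop_duplicates().reset_index(drop=True)
dim_customer["customer_key"] = dim_customer.index + 1

dim_product = sales[["product_name"]].drop_duplicates().reset_index(drop=True)
dim_product["product_key"] = dim_product.index + 1

# Fact: measures plus foreign keys into the dimensions
fact_sales = (
    sales.merge(dim_customer, on="customer_name")
         .merge(dim_product, on="product_name")
         [["order_id", "order_date", "customer_key", "product_key", "amount"]]
)
```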
Job Summary:
Position : Senior Power BI Developer
Experience : 4+Years
Location : Ahmedabad - WFO
Key Responsibilities:
- Design, develop, and maintain interactive and user-friendly Power BI dashboards and reports.
- Translate business requirements into functional and technical specifications.
- Perform data modeling, DAX calculations, and Power Query transformations.
- Integrate data from multiple sources including SQL Server, Excel, SharePoint, and APIs.
- Optimize Power BI datasets, reports, and dashboards for performance and usability.
- Collaborate with business analysts, data engineers, and stakeholders to ensure data accuracy and relevance.
- Ensure security and governance best practices in Power BI workspaces and datasets.
- Provide ongoing support and troubleshooting for existing Power BI solutions.
- Stay updated with Power BI updates, best practices, and industry trends.
Required Skills & Qualifications:
- Bachelor’s degree in Computer Science, Information Technology, Data Analytics, or a related field.
- 4+ years of professional experience in data analytics or business intelligence.
- 3+ years of hands-on experience with Power BI (Power BI Desktop, Power BI Service).
- Strong expertise in DAX, Power Query (M Language), and data modeling (star/snowflake schema).
- Proficiency in writing complex SQL queries and optimizing them for performance.
- Experience in working with large and complex datasets.
- Experience with BigQuery, MySQL, or Looker Studio is a plus.
- Ecommerce Industry Experience will be an added advantage.
- Solid understanding of data warehousing concepts and ETL processes.
- Experience with Power Platform tools such as Power Apps and Power Automate would be a plus.
Preferred Qualifications:
- Microsoft Power BI Certification (PL-300 or equivalent) is a plus.
- Experience with Azure Data Services (Azure Data Factory, Azure SQL, Synapse).
- Knowledge of other BI tools (Tableau, Qlik) is a plus.
- Familiarity with scripting languages (Python, R) for data analysis is a bonus.
- Experience integrating Power BI into web portals using Power BI Embedded.
Job Title : Senior Data Engineer
Experience : 6 to 10 Years
Location : Gurgaon (Hybrid – 3 days office / 2 days WFH)
Notice Period : Immediate to 30 days (Buyout option available)
About the Role :
We are looking for an experienced Senior Data Engineer to join our Digital IT team in Gurgaon.
This role involves building scalable data pipelines, managing data architecture, and ensuring smooth data flow across the organization while maintaining high standards of security and compliance.
Mandatory Skills :
Azure Data Factory (ADF), Azure Cloud Services, SQL, Data Modelling, CI/CD tools, Git, Data Governance, RDBMS & NoSQL databases (e.g., SQL Server, PostgreSQL, Redis, ElasticSearch), Data Lake migration.
Key Responsibilities :
- Design and develop secure, scalable end-to-end data pipelines using Azure Data Factory (ADF) and Azure services.
- Build and optimize data architectures (including Medallion Architecture; see the sketch after this list).
- Collaborate with cross-functional teams on cybersecurity, data privacy (e.g., GDPR), and governance.
- Manage structured/unstructured data migration to Data Lake.
- Ensure CI/CD integration for data workflows and version control using Git.
- Identify and integrate data sources (internal/external) in line with business needs.
- Proactively highlight gaps and risks related to data compliance and integrity.
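As an illustrative sketch only (not this team's actual pipeline), a minimal PySpark example of bronze/silver/gold (Medallion) layering. The paths, columns, and file formats are hypothetical; in practice each layer would usually be Delta tables in ADLS, orchestrated by ADF.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion-demo").getOrCreate()

# Bronze: land the raw extract as-is (hypothetical path)
bronze = spark.read.json("/lake/raw/orders/")
bronze.write.mode("append").parquet("/lake/bronze/orders")

# Silver: deduplicate and conform types
silver = (
    spark.read.parquet("/lake/bronze/orders")
    .dropDuplicates(["order_id"])
    .withColumn("order_date", F.to_date("order_ts"))
)
silver.write.mode("overwrite").parquet("/lake/silver/orders")

# Gold: business-level aggregate ready for reporting
gold = silver.groupBy("order_date").agg(F.sum("amount").alias("daily_revenue"))
gold.write.mode("overwrite").parquet("/lake/gold/daily_revenue")
```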
Required Skills :
- Azure Data Factory (ADF) – Mandatory
- Strong SQL and Data Modelling expertise.
- Hands-on with Azure Cloud Services and data architecture.
- Experience with CI/CD tools and version control (Git).
- Good understanding of Data Governance practices.
- Exposure to ETL/ELT pipelines and Data Lake migration.
- Working knowledge of RDBMS and NoSQL databases (e.g., SQL Server, PostgreSQL, Redis, ElasticSearch).
- Understanding of RESTful APIs, deployment on cloud/on-prem infrastructure.
- Strong problem-solving, communication, and collaboration skills.
Additional Info :
- Work Mode : Hybrid (No remote); relocation to Gurgaon required for non-NCR candidates.
- Communication : Above-average verbal and written English skills.
Perks & Benefits :
- 5-day work week
- Global exposure and leadership collaboration.
- Health insurance, employee-friendly policies, training and development.
Job Description :
We are seeking a highly experienced Sr Data Modeler / Solution Architect to join the Data Architecture team at Corporate Office in Bangalore. The ideal candidate will have 4 to 8 years of experience in data modeling and architecture with deep expertise in AWS cloud stack, data warehousing, and enterprise data modeling tools. This individual will be responsible for designing and creating enterprise-grade data models and driving the implementation of Layered Scalable Architecture or Medallion Architecture to support robust, scalable, and high-quality data marts across multiple business units.
This role will involve managing complex datasets from systems like PoS, ERP, CRM, and external sources, while optimizing performance and cost. You will also provide strategic leadership on data modeling standards, governance, and best practices, ensuring the foundation for analytics and reporting is solid and future-ready.
Key Responsibilities:
· Design and deliver conceptual, logical, and physical data models using tools like ERWin.
· Implement Layered Scalable Architecture / Medallion Architecture for building scalable, standardized data marts.
· Optimize performance and cost of AWS-based data infrastructure (Redshift, S3, Glue, Lambda, etc.).
· Collaborate with cross-functional teams (IT, business, analysts) to gather data requirements and ensure model alignment with KPIs and business logic.
· Develop and optimize SQL code, materialized views, and stored procedures in AWS Redshift (see the sketch after this list).
· Ensure data governance, lineage, and quality mechanisms are established across systems.
· Lead and mentor technical teams in an Agile project delivery model.
· Manage data layer creation and documentation: data dictionary, ER diagrams, purpose mapping.
· Identify data gaps and availability issues with respect to source systems.
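For context, a hedged Python/psycopg2 sketch of the materialized-view pattern mentioned above: precompute a heavy aggregate once so dashboards read a small, fast table. The cluster endpoint, credentials, schema, and columns are hypothetical.

```python
import psycopg2

# Hypothetical Redshift endpoint and credentials
conn = psycopg2.connect(
    host="example-cluster.abc123.ap-south-1.redshift.amazonaws.com",
    port=5439, dbname="analytics", user="etl_user", password="***",
)
cur = conn.cursor()

# Precompute a heavy aggregate so BI queries stay cheap
cur.execute("""
    CREATE MATERIALIZED VIEW mv_daily_sales AS
    SELECT order_date, store_id, SUM(amount) AS revenue
    FROM sales.orders
    GROUP BY order_date, store_id;
""")
cur.execute("REFRESH MATERIALIZED VIEW mv_daily_sales;")
conn.commit()
```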
Required Skills & Qualifications:
· Bachelor’s or Master’s degree in Computer Science, IT, or related field (B.E./B.Tech/M.E./M.Tech/MCA).
· Minimum 4 years of experience in data modeling and architecture.
· Proficiency with data modeling tools such as ERWin, with strong knowledge of forward and reverse engineering.
· Deep expertise in SQL (including advanced SQL, stored procedures, performance tuning).
· Strong experience in data warehousing, RDBMS, and ETL tools like AWS Glue, IBM DataStage, or SAP Data Services.
· Hands-on experience with AWS services: Redshift, S3, Glue, RDS, Lambda, Bedrock, and Q.
· Good understanding of reporting tools such as Tableau, Power BI, or AWS QuickSight.
· Exposure to DevOps/CI-CD pipelines, AI/ML, Gen AI, NLP, and polyglot programming is a plus.
· Familiarity with data governance tools (e.g., ORION/EIIG).
· Domain knowledge in Retail, Manufacturing, HR, or Finance preferred.
· Excellent written and verbal communication skills.
Certifications (Preferred):
· AWS Certification (e.g., AWS Certified Solutions Architect or Data Analytics – Specialty)
· Data Governance or Data Modeling Certifications (e.g., CDMP, Databricks, or TOGAF)
Mandatory Skills
AWS, Technical Architecture, AI/ML, SQL, Data Warehousing, Data Modelling
Job Description:
We are seeking a skilled Power BI Developer with a strong understanding of Capital Markets to join our data analytics team. The ideal candidate will be responsible for designing, developing, and maintaining interactive dashboards and reports that provide insights into trading, risk, and financial performance. This role requires experience working with capital market data sets and a solid grasp of financial instruments and market operations.
Key Responsibilities:
- Develop interactive Power BI dashboards and reports tailored to capital markets (e.g., equities, derivatives, fixed income).
- Connect to and integrate data from various sources such as Bloomberg, Reuters, SQL databases, and Excel.
- Translate business requirements into data models and visualizations that provide actionable insights.
- Optimize Power BI reports for performance, usability, and scalability.
- Work closely with business stakeholders (trading, risk, compliance) to understand KPIs and analytics needs.
- Implement row-level security and data access controls.
- Maintain data quality, lineage, and versioning documentation.
Required Skills & Qualifications:
- 3+ years of experience with Power BI (Power Query, DAX, data modeling).
- Strong understanding of capital markets: trading workflows, market data, instruments (equities, bonds, derivatives, etc.).
- Experience with SQL and working with large financial datasets.
- Familiarity with risk metrics, trade lifecycle, and financial statement analysis.
- Knowledge of data governance, security, and performance tuning in BI environments.
- Excellent communication skills and ability to work with cross-functional teams.
Preferred Qualifications:
- Experience with Python or R for data analysis.
- Knowledge of investment banking or asset management reporting frameworks.
- Exposure to cloud platforms like Azure, AWS, or GCP.
- Certifications in Power BI or Capital Markets.
Job Title : Solution Architect – Denodo
Experience : 10+ Years
Location : Remote / Work from Home
Notice Period : Immediate joiners preferred
Job Overview :
We are looking for an experienced Solution Architect – Denodo to lead the design and implementation of data virtualization solutions. In this role, you will work closely with cross-functional teams to ensure our data architecture aligns with strategic business goals. The ideal candidate will bring deep expertise in Denodo, strong technical leadership, and a passion for driving data-driven decisions.
Mandatory Skills : Denodo, Data Virtualization, Data Architecture, SQL, Data Modeling, ETL, Data Integration, Performance Optimization, Communication Skills.
Key Responsibilities :
- Architect and design scalable data virtualization solutions using Denodo.
- Collaborate with business analysts and engineering teams to understand requirements and define technical specifications.
- Ensure adherence to best practices in data governance, performance, and security.
- Integrate Denodo with diverse data sources and optimize system performance.
- Mentor and train team members on Denodo platform capabilities.
- Lead tool evaluations and recommend suitable data integration technologies.
- Stay updated with emerging trends in data virtualization and integration.
Required Qualifications :
- Bachelor’s degree in Computer Science, IT, or a related field.
- 10+ Years of experience in data architecture and integration.
- Proven expertise in Denodo and data virtualization frameworks.
- Strong proficiency in SQL and data modeling.
- Hands-on experience with ETL processes and data integration tools.
- Excellent communication, presentation, and stakeholder management skills.
- Ability to lead technical discussions and influence architectural decisions.
- Denodo or data architecture certifications are a strong plus.
Job Title : Senior Software Engineer – Backend
Experience Required : 6 to 12 Years
Location : Bengaluru (Hybrid – 3 Days Work From Office)
Number of Openings : 2
Work Hours : 11:00 AM – 8:00 PM IST
Notice Period : 30 Days Preferred
Work Location : SmartWorks The Cube, Karle Town SEZ, Building No. 5, Nagavara, Bangalore – 560045
Note : Face-to-face interview in Bangalore is mandatory during the second round.
Role Overview :
We are looking for an experienced Senior Backend Developer to join our growing team. This is a hands-on role focused on building cloud-based, scalable applications in the mortgage finance domain.
Key Responsibilities :
- Design, develop, and maintain backend components for cloud-based web applications.
- Contribute to architectural decisions involving microservices and distributed systems.
- Work extensively with Node.js and RESTful APIs.
- Implement scalable solutions using AWS services (e.g., Lambda, SQS, SNS, RDS).
- Utilize both relational and NoSQL databases effectively.
- Collaborate with cross-functional teams to deliver robust and maintainable code.
- Participate in agile development practices and deliver rapid iterations based on feedback.
- Take ownership of system performance, scalability, and reliability.
Core Requirements :
- 5+ Years of total experience in backend development.
- Minimum 3 Years of experience in building scalable microservices or delivering large-scale products.
- Strong expertise in Node.js and REST APIs.
- Solid experience with RDBMS, SQL, and data modeling.
- Good understanding of distributed systems, scalability, and availability.
- Familiarity with AWS infrastructure and services.
- Development experience in Python and/or Java is a plus.
Preferred Skills :
- Experience with frontend frameworks like React.js or AngularJS.
- Working knowledge of Docker and containerized applications.
Interview Process :
- Round 1 : Online technical assessment (1 hour)
- Round 2 : Virtual technical interview
- Round 3 : In-person interview at the Bangalore office (2 hours – mandatory)

Role: Data Engineer (14+ years of experience)
Location: Whitefield, Bangalore
Mode of Work: Hybrid (3 days from office)
Notice period: Immediate / serving notice with 30 days left
Note: Candidate should be based in Bangalore, as one round has to be taken face-to-face (F2F)
Job Summary:
Role and Responsibilities
● Design and implement scalable data pipelines for ingesting, transforming, and loading data from various tools and sources.
● Design data models to support data analysis and reporting.
● Automate data engineering tasks using scripting languages and tools.
● Collaborate with engineers, process managers, data scientists to understand their needs and design solutions.
● Act as a bridge between the engineering and the business team in all areas related to Data.
● Automate monitoring and alerting mechanisms for data pipelines, products, and dashboards, and troubleshoot any issues; on-call participation is required.
● SQL creation and optimization, including modularization (e.g., creating views and tables in the source systems) where needed.
● Define best practices for data validation and automate them as much as possible, aligning with enterprise standards (see the sketch after this list).
● Manage data in QA environments (e.g., test data management).
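As a minimal illustration of automated data validation (not an enterprise standard), a small pandas sketch. The column names and rules are hypothetical; real pipelines would typically run such checks inside the orchestration layer and alert on failure.

```python
import pandas as pd

def validate(df: pd.DataFrame) -> list[str]:
    """Run a few generic data-quality checks and return a list of failure messages."""
    failures = []
    if df["order_id"].duplicated().any():
        failures.append("duplicate order_id values")
    if df["amount"].lt(0).any():
        failures.append("negative amounts")
    if df["order_date"].isna().any():
        failures.append("missing order_date")
    return failures

orders = pd.DataFrame({
    "order_id": [1, 2, 2],
    "amount": [10.0, -5.0, 7.5],
    "order_date": pd.to_datetime(["2024-06-01", None, "2024-06-02"]),
})

problems = validate(orders)
if problems:
    raise ValueError("Data validation failed: " + "; ".join(problems))
```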
Qualifications
● 14+ years of experience as a Data engineer or related role.
● Experience with Agile engineering practices.
● Strong experience in writing queries for RDBMS, cloud-based data warehousing solutions like Snowflake and Redshift.
● Experience with SQL and NoSQL databases.
● Ability to work independently or as part of a team.
● Experience with cloud platforms, preferably AWS.
● Strong experience with data warehousing and data lake technologies (Snowflake)
● Expertise in data modelling
● Experience with ETL/ELT tools and methodologies.
● 5+ years of experience in application development including Python, SQL, Scala, or Java
● Experience working with real-time data streaming and data streaming platforms.
NOTE: IT IS MANDATORY TO ATTEND ONE TECHNICAL ROUND FACE-TO-FACE.

Location: Mumbai
Job Type: Full-Time (Hybrid – 3 days in office, 2 days WFH)
Job Overview:
We are looking for a skilled Azure Data Engineer with strong experience in data modeling, pipeline development, and SQL/Spark expertise. The ideal candidate will work closely with the Data Analytics & BI teams to implement robust data solutions on Azure Synapse and ensure seamless data integration with third-party applications.
Key Responsibilities:
- Design, develop, and maintain Azure data pipelines using Azure Synapse (SQL dedicated pools or Apache Spark pools).
- Implement data models in collaboration with the Data Analytics and BI teams.
- Optimize and manage large-scale SQL and Spark-based data processing solutions.
- Ensure data availability and reliability for third-party application consumption.
- Collaborate with cross-functional teams to translate business requirements into scalable data solutions.
Required Skills & Experience:
3–5 years of hands-on experience in:
- Azure data services
- Data Modeling
- SQL development and tuning
- Apache Spark
- Strong knowledge of Azure Synapse Analytics.
- Experience in designing data pipelines and ETL/ELT processes.
- Ability to troubleshoot and optimize complex data workflows.
Preferred Qualifications:
- Experience with data governance, security, and data quality practices.
- Familiarity with DevOps practices in a data engineering context.
- Effective communication skills and the ability to work in a collaborative team environment.
Involved in capex modelling, budgeting, and investment decision-making
Should be able to converse well with global stakeholders
O&G, metals & mining, and heavy manufacturing background preferred
Other: not a frequent job hopper
Margin optimization: holistic understanding of the O&G P&L (not limited to 1-2 line items only)
Job Title : Cognos BI Developer
Experience : 6+ Years
Location : Bangalore / Hyderabad (Hybrid)
Notice Period : Immediate Joiners Preferred (Candidates serving notice with 10–15 days left can be considered)
Interview Mode : Virtual
Job Description :
We are seeking an experienced Cognos BI Developer with strong data modeling, dashboarding, and reporting expertise to join our growing team. The ideal candidate should have a solid background in business intelligence, data visualization, and performance analysis, and be comfortable working in a hybrid setup from Bangalore or Hyderabad.
Mandatory Skills :
Cognos BI, Framework Manager, Cognos Dashboarding, SQL, Data Modeling, Report Development (charts, lists, cross tabs, maps), ETL Concepts, KPIs, Drill-through, Macros, Prompts, Filters, Calculations.
Key Responsibilities :
- Understand business requirements in the BI context and design data models using Framework Manager to transform raw data into meaningful insights.
- Develop interactive dashboards and reports using Cognos Dashboard.
- Identify and define KPIs and create reports to monitor them effectively.
- Analyze data and present actionable insights to support business decision-making.
- Translate business requirements into technical specifications and determine timelines for execution.
- Design and develop models in Framework Manager, publish packages, manage security, and create reports based on these packages.
- Develop various types of reports, including charts, lists, cross tabs, and maps, and design dashboards combining multiple reports.
- Implement reports using macros, prompts, filters, and calculations.
- Perform data warehouse development activities and ensure seamless data flow.
- Write and optimize SQL queries to investigate data and resolve performance issues.
- Utilize Cognos features such as master-detail reports, drill-throughs, bookmarks, and page sets.
- Analyze and improve ETL processes to enhance data integration.
- Apply technical enhancements to existing BI systems to improve their performance and usability.
- Possess solid understanding of database fundamentals, including relational and multidimensional database design.
- Hands-on experience with Cognos Data Modules (data modeling) and dashboarding.
Job Title : Data Engineer – Snowflake Expert
Location : Pune (Onsite)
Experience : 10+ Years
Employment Type : Contractual
Mandatory Skills : Snowflake, Advanced SQL, ETL/ELT (Snowpipe, Tasks, Streams), Data Modeling, Performance Tuning, Python, Cloud (preferably Azure), Security & Data Governance.
Job Summary :
We are seeking a seasoned Data Engineer with deep expertise in Snowflake to design, build, and maintain scalable data solutions.
The ideal candidate will have a strong background in data modeling, ETL/ELT, SQL optimization, and cloud data warehousing principles, with a passion for leveraging Snowflake to drive business insights.
Responsibilities :
- Collaborate with data teams to optimize and enhance data pipelines and models on Snowflake.
- Design and implement scalable ELT pipelines with performance and cost-efficiency in mind (see the sketch after this list).
- Ensure high data quality, security, and adherence to governance frameworks.
- Conduct code reviews and align development with best practices.
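For illustration, a hedged sketch of an incremental ELT pattern on Snowflake using a stream and a scheduled task via the Python connector. The account details, databases, tables, and columns are hypothetical placeholders.

```python
import snowflake.connector

# Hypothetical account, warehouse, and schema
conn = snowflake.connector.connect(
    account="my_account", user="etl_user", password="***",
    warehouse="ETL_WH", database="ANALYTICS", schema="RAW",
)
cur = conn.cursor()

# Stream captures new/changed rows landing in the raw table
cur.execute("CREATE OR REPLACE STREAM ORDERS_STREAM ON TABLE RAW.ORDERS")

# Scheduled task merges captured changes into the curated layer only when data exists
cur.execute("""
    CREATE OR REPLACE TASK MERGE_ORDERS
      WAREHOUSE = ETL_WH
      SCHEDULE = '5 MINUTE'
      WHEN SYSTEM$STREAM_HAS_DATA('ORDERS_STREAM')
    AS
      MERGE INTO CURATED.ORDERS t
      USING ORDERS_STREAM s ON t.ORDER_ID = s.ORDER_ID
      WHEN MATCHED THEN UPDATE SET t.AMOUNT = s.AMOUNT
      WHEN NOT MATCHED THEN INSERT (ORDER_ID, AMOUNT) VALUES (s.ORDER_ID, s.AMOUNT)
""")
cur.execute("ALTER TASK MERGE_ORDERS RESUME")
```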
Qualifications :
- Bachelor’s in Computer Science, Data Science, IT, or related field.
- Snowflake certifications (Pro/Architect) preferred.
- Extract Transform Load (ETL) and ETL Tools skills
- Data Modeling and Data Integration expertise
- Data Warehousing knowledge
- Experience in working with SQL databases
- Strong analytical and problem-solving abilities
- Excellent communication and interpersonal skills
- Bachelor's degree in Computer Science, Information Systems, or related field
- Relevant certifications in ETL Testing or Data Warehousing

About the Role:
We are seeking a talented Lead Data Engineer to join our team and play a pivotal role in transforming raw data into valuable insights. As a Data Engineer, you will design, develop, and maintain robust data pipelines and infrastructure to support our organization's analytics and decision-making processes.
Responsibilities:
- Data Pipeline Development: Build and maintain scalable data pipelines to extract, transform, and load (ETL) data from various sources (e.g., databases, APIs, files) into data warehouses or data lakes.
- Data Infrastructure: Design, implement, and manage data infrastructure components, including data warehouses, data lakes, and data marts.
- Data Quality: Ensure data quality by implementing data validation, cleansing, and standardization processes.
- Team Management: Ability to lead and manage a team.
- Performance Optimization: Optimize data pipelines and infrastructure for performance and efficiency.
- Collaboration: Collaborate with data analysts, scientists, and business stakeholders to understand their data needs and translate them into technical requirements.
- Tool and Technology Selection: Evaluate and select appropriate data engineering tools and technologies (e.g., SQL, Python, Spark, Hadoop, cloud platforms).
- Documentation: Create and maintain clear and comprehensive documentation for data pipelines, infrastructure, and processes.
Skills:
- Strong proficiency in SQL and at least one programming language (e.g., Python, Java).
- Experience with data warehousing and data lake technologies (e.g., Snowflake, AWS Redshift, Databricks).
- Knowledge of cloud platforms (e.g., AWS, GCP, Azure) and cloud-based data services.
- Understanding of data modeling and data architecture concepts.
- Experience with ETL/ELT tools and frameworks.
- Excellent problem-solving and analytical skills.
- Ability to work independently and as part of a team.
Preferred Qualifications:
- Experience with real-time data processing and streaming technologies (e.g., Kafka, Flink).
- Knowledge of machine learning and artificial intelligence concepts.
- Experience with data visualization tools (e.g., Tableau, Power BI).
- Certification in cloud platforms or data engineering.
Bangalore / Chennai
- Hands-on data modelling for OLTP and OLAP systems
- In-depth knowledge of Conceptual, Logical and Physical data modelling
- Strong understanding of Indexing, partitioning, data sharding, with practical experience of having done the same
- Strong understanding of variables impacting database performance for near-real-time reporting and application interaction.
- Should have working experience on at least one data modelling tool, preferably DBSchema, Erwin
- Good understanding of GCP databases like AlloyDB, CloudSQL, and BigQuery (see the sketch after this list).
- People with functional knowledge of the mutual fund industry will be a plus
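As a hedged example of how partitioning and clustering are typically expressed on BigQuery (where they take the place of conventional indexing and sharding), a short google-cloud-bigquery sketch. The project, dataset, and columns are hypothetical.

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # hypothetical project

# Partitioning prunes scans by date; clustering co-locates rows for common filters
ddl = """
CREATE TABLE IF NOT EXISTS analytics.transactions (
  txn_id STRING,
  fund_code STRING,
  investor_id STRING,
  txn_ts TIMESTAMP,
  amount NUMERIC
)
PARTITION BY DATE(txn_ts)
CLUSTER BY fund_code, investor_id
"""
client.query(ddl).result()
```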
Role & Responsibilities:
● Work with business users and other stakeholders to understand business processes.
● Design and implement dimension and fact tables
● Identify and implement data transformation/cleansing requirements
● Develop a highly scalable, reliable, and high-performance data processing pipeline to extract, transform and load data from various systems to the Enterprise Data Warehouse
● Develop conceptual, logical, and physical data models with associated metadata including data lineage and technical data definitions
● Design, develop and maintain ETL workflows and mappings using the appropriate data load technique
● Provide research, high-level design, and estimates for data transformation and data integration from source applications to end-user BI solutions.
● Provide production support of ETL processes to ensure timely completion and availability of data in the data warehouse for reporting use.
● Analyze and resolve problems and provide technical assistance as necessary. Partner with the BI team to evaluate, design, develop BI reports and dashboards according to functional specifications while maintaining data integrity and data quality.
● Work collaboratively with key stakeholders to translate business information needs into well-defined data requirements to implement the BI solutions.
● Leverage transactional information, data from ERP, CRM, HRIS applications to model, extract and transform into reporting & analytics.
● Define and document the use of BI through user experience/use cases, prototypes, test, and deploy BI solutions.
● Develop and support data governance processes, analyze data to identify and articulate trends, patterns, outliers, quality issues, and continuously validate reports, dashboards and suggest improvements.
● Train business end-users, IT analysts, and developers.
Required Skills:
● Bachelor’s degree in Computer Science or similar field or equivalent work experience.
● 5+ years of experience on Data Warehousing, Data Engineering or Data Integration projects.
● Expert with data warehousing concepts, strategies, and tools.
● Strong SQL background.
● Strong knowledge of relational databases like SQL Server, PostgreSQL, MySQL.
● Strong experience in GCP and Google BigQuery, Cloud SQL, Composer (Airflow), Dataflow, Dataproc, Cloud Functions, and GCS (see the Airflow sketch after this list)
● Good to have knowledge on SQL Server Reporting Services (SSRS), and SQL Server Integration Services (SSIS).
● Knowledge of AWS and Azure Cloud is a plus.
● Experience in Informatica PowerExchange for Mainframe, Salesforce, and other new-age data sources.
● Experience in integration using APIs, XML, JSON, etc.
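For illustration only, a minimal Airflow (Composer-style) DAG sketch with two dependent tasks. The DAG id, schedule, and task bodies are hypothetical placeholders, not this team's actual workflow.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

# Hypothetical callables standing in for real extract/load logic
def extract_sales():
    print("pull yesterday's sales extract from Cloud SQL")

def load_to_bigquery():
    print("load the cleaned extract into a BigQuery reporting table")

with DAG(
    dag_id="daily_sales_load",        # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_sales", python_callable=extract_sales)
    load = PythonOperator(task_id="load_to_bigquery", python_callable=load_to_bigquery)
    extract >> load
```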
📢 Job Title : MDM Business Analyst
📍 Location : On-site, Sydney, Australia
💼 Experience : 5+ Years
🗓 Notice Period : Immediate
Summary :
We are seeking a skilled MDM Business Analyst with hands-on experience in Ataccama or Informatica MDM solutions. The ideal candidate will work closely with business and technical teams to define MDM strategies, ensure data quality, and drive enterprise data governance.
🔧 Key Responsibilities :
- Gather, analyze, and document business and data governance requirements.
- Design and support implementation of MDM solutions in collaboration with technical teams.
- Prepare BRDs, FRSs, and TDDs to translate business needs into technical specifications.
- Lead data profiling, modeling, cleansing, and mapping activities.
- Ensure effective MDM performance, hierarchy management, and workflow automation.
- Support UAT by defining test cases and validating data quality.
- Act as a liaison between business, IT, and vendors to ensure smooth project delivery.
✅ Required Skills :
- Bachelor’s in Computer Science, IT, or related field.
- 5+ years of experience in MDM-focused Business Analyst roles.
- Expertise in Ataccama MDM / Informatica MDM (or similar tools).
- Strong SQL, data modeling, and data governance experience.
- Familiarity with relational databases (Oracle, SQL Server, PostgreSQL).
- Excellent communication, documentation, and stakeholder management skills.
➕ Nice to Have :
- Familiarity with Agile (Scrum/Kanban) methodologies.
- Experience with BI tools (e.g., Power BI, Tableau).
- Certifications in MDM, CBAP, PMP, or Informatica.
- Exposure to cloud MDM (AWS, Azure, GCP).
- Experience in finance, healthcare, or retail domains.

Key Responsibilities :
- Centralize structured and unstructured data
- Contribute to data strategy through data modeling, management, and governance
- Build, optimize, and maintain data pipelines and management frameworks
- Collaborate with cross-functional teams to develop scalable data and AI-driven solutions
- Take ownership of projects from ideation to production
Ideal Qualifications and Skills :
- Bachelor's degree in Computer Science or equivalent experience
- 8+ years of industry experience
- Strong expertise in data modeling and management concepts
- Experience with Snowflake, data warehousing, and data pipelines
- Proficiency in Python or another programming language
- Excellent communication, collaboration, and ownership mindset
- Foundational knowledge of API development and integration
Nice to Have :
- Experience with Tableau, Alteryx
- Master data management implementation experience
Success Factors :
- Strong technical foundation
- Collaborative mindset
- Ability to navigate complex data challenges
- Ownership mindset and startup-like culture fit
1. GCP - GCS, Pub/Sub, Dataflow or Dataproc, BigQuery, BQ optimization, Airflow/Composer, Python (preferred)/Java
2. ETL on GCP Cloud - build pipelines (Python/Java) plus scripting, best practices, challenges
3. Knowledge of batch and streaming data ingestion; build end-to-end data pipelines on GCP (see the sketch after this list)
4. Knowledge of databases (SQL, NoSQL), on-premise and on-cloud, SQL vs. NoSQL, types of NoSQL DBs (at least 2 databases)
5. Data warehouse concepts - beginner to intermediate level
6. Data modeling, GCP databases, DBSchema (or similar)
7. Hands-on data modelling for OLTP and OLAP systems
8. In-depth knowledge of conceptual, logical, and physical data modelling
9. Strong understanding of indexing, partitioning, and data sharding, with practical experience of having done the same
10. Strong understanding of variables impacting database performance for near-real-time reporting and application interaction
11. Should have working experience on at least one data modelling tool, preferably DBSchema or Erwin
12. Good understanding of GCP databases like AlloyDB, CloudSQL, and BigQuery
13. People with functional knowledge of the mutual fund industry will be a plus
Note: Should be willing to work from Chennai; office presence is mandatory.
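As an illustrative sketch of streaming ingestion on GCP (not a prescribed solution), a minimal Apache Beam pipeline reading from Pub/Sub and appending to an existing BigQuery table. The project, subscription, and table names are hypothetical.

```python
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# Streaming read from a hypothetical Pub/Sub subscription into an existing BigQuery table.
# Add --runner=DataflowRunner plus project/region/temp_location options to run on Dataflow.
options = PipelineOptions(streaming=True)

with beam.Pipeline(options=options) as p:
    (
        p
        | "ReadPubSub" >> beam.io.ReadFromPubSub(
            subscription="projects/my-project/subscriptions/orders-sub")
        | "ParseJson" >> beam.Map(lambda msg: json.loads(msg.decode("utf-8")))
        | "WriteBigQuery" >> beam.io.WriteToBigQuery(
            "my-project:analytics.orders",
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
            create_disposition=beam.io.BigQueryDisposition.CREATE_NEVER)  # table assumed to exist
    )
```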
Role & Responsibilities
Data Organization and Governance: Define and maintain governance standards that span multiple systems (AWS, Fivetran, Snowflake, PostgreSQL, Salesforce/nCino, Looker), ensuring that data remains accurate, accessible, and organized across the organization.
Solve Data Problems Proactively: Address recurring data issues that sidetrack operational and strategic initiatives by implementing processes and tools to anticipate, identify, and resolve root causes effectively.
System Integration: Lead the integration of diverse systems into a cohesive data environment, optimizing workflows and minimizing manual intervention.
Hands-On Problem Solving: Take a hands-on approach to resolving reporting issues and troubleshooting data challenges when necessary, ensuring minimal disruption to business operations.
Collaboration Across Teams: Work closely with business and technical stakeholders to understand and solve our biggest challenges
Mentorship and Leadership: Guide and mentor team members, fostering a culture of accountability and excellence in data management practices.
Strategic Data Support: Ensure that marketing, analytics, and other strategic initiatives are not derailed by data integrity issues, enabling the organization to focus on growth and innovation.
We are seeking a highly skilled and experienced Power BI Lead / Architect to join our growing team. The ideal candidate will have a strong understanding of data warehousing, data modeling, and business intelligence best practices. This role will be responsible for leading the design, development, and implementation of complex Power BI solutions that provide actionable insights to key stakeholders across the organization.
Location - Pune (Hybrid 3 days)
Responsibilities:
Lead the design, development, and implementation of complex Power BI dashboards, reports, and visualizations.
Develop and maintain data models (star schema, snowflake schema) for optimal data analysis and reporting.
Perform data analysis, data cleansing, and data transformation using SQL and other ETL tools.
Collaborate with business stakeholders to understand their data needs and translate them into effective and insightful reports.
Develop and maintain data pipelines and ETL processes to ensure data accuracy and consistency.
Troubleshoot and resolve technical issues related to Power BI dashboards and reports.
Provide technical guidance and mentorship to junior team members.
Stay abreast of the latest trends and technologies in the Power BI ecosystem.
Ensure data security, governance, and compliance with industry best practices.
Contribute to the development and improvement of the organization's data and analytics strategy.
May lead and mentor a team of junior Power BI developers.
Qualifications:
8-12 years of experience in Business Intelligence and Data Analytics.
Proven expertise in Power BI development, including DAX, advanced data modeling techniques.
Strong SQL skills, including writing complex queries, stored procedures, and views.
Experience with ETL/ELT processes and tools.
Experience with data warehousing concepts and methodologies.
Excellent analytical, problem-solving, and communication skills.
Strong teamwork and collaboration skills.
Ability to work independently and proactively.
Bachelor's degree in Computer Science, Information Systems, or a related field preferred.

Senior Data Analyst
Experience: 8+ Years
Work Mode: Remote Full Time
Responsibilities:
• Analyze large datasets to uncover trends, patterns, and insights to support business goals.
• Design, develop, and manage interactive dashboards and reports using Power BI.
• Utilize DAX and SQL for advanced data querying and data modeling.
• Create and manage complex SQL queries for data extraction, transformation, and loading processes.
• Collaborate with cross-functional teams to understand data requirements and translate them into actionable solutions.
• Maintain data accuracy and integrity across projects, ensuring reliable data-driven insights.
• Present findings to stakeholders, translating complex data insights into simple, actionable business recommendations.
Skills:
Power BI, DAX (Data Analysis Expressions), SQL, Data Modeling, Python
Preferred Skills:
• Machine Learning: Exposure to machine learning models and their integration within analytical solutions.
• Microsoft Fabric: Familiarity with Microsoft Fabric for enhanced data integration and management.

Job Description for Data Engineer Role:-
Must have:
Experience working with Programming languages. Solid foundational and conceptual knowledge is expected.
Experience working with Databases and SQL optimizations
Experience as a team lead or tech lead, with the ability to independently drive tech decisions and execution and motivate the team in ambiguous problem spaces.
Problem-solving, judgement, and strategic decision-making skills to drive the team forward.
Role and Responsibilities:
- Share your passion for staying on top of tech trends, experimenting with and learning new technologies, participating in internal and external technology communities, and mentoring other members of the engineering community; from time to time, you may be asked to code or evaluate code
- Collaborate with digital product managers and leaders from other teams to refine the strategic needs of the project
- Utilize programming languages like Java, Python, SQL, Node, Go, and Scala, Open Source RDBMS and NoSQL databases
- Defining best practices for data validation and automating as much as possible; aligning with the enterprise standards
Qualifications -
- Experience with SQL and NoSQL databases.
- Experience with cloud platforms, preferably AWS.
- Strong experience with data warehousing and data lake technologies (Snowflake)
- Expertise in data modelling
- Experience with ETL/ELT tools and methodologies
- Experience working on real-time Data Streaming and Data Streaming platforms
- 2+ years of experience in at least one of the following: Java, Scala, Python, Go, or Node.js
- 2+ years working with SQL and NoSQL databases, data modeling and data management
- 2+ years of experience with AWS, GCP, Azure, or another cloud service.
The Sr. Analytics Engineer would provide technical expertise in needs identification, data modeling, data movement, and transformation mapping (source to target), automation and testing strategies, translating business needs into technical solutions with adherence to established data guidelines and approaches from a business unit or project perspective.
Understands and leverages best-fit technologies (e.g., traditional star schema structures, cloud, Hadoop, NoSQL, etc.) and approaches to address business and environmental challenges.
Provides data understanding and coordinates data-related activities with other data management groups such as master data management, data governance, and metadata management.
Actively participates with other consultants in problem-solving and approach development.
Responsibilities :
Provide a consultative approach with business users, asking questions to understand the business need and deriving the data flow, conceptual, logical, and physical data models based on those needs.
Perform data analysis to validate data models and to confirm the ability to meet business needs.
Assist with and support setting the data architecture direction, ensuring data architecture deliverables are developed, ensuring compliance to standards and guidelines, implementing the data architecture, and supporting technical developers at a project or business unit level.
Coordinate and consult with the Data Architect, project manager, client business staff, client technical staff and project developers in data architecture best practices and anything else that is data related at the project or business unit levels.
Work closely with Business Analysts and Solution Architects to design the data model satisfying the business needs and adhering to Enterprise Architecture.
Coordinate with Data Architects, Program Managers and participate in recurring meetings.
Help and mentor team members to understand the data model and subject areas.
Ensure that the team adheres to best practices and guidelines.
Requirements :
- At least 3 years of strong working knowledge of Spark, Java/Scala/PySpark, Kafka, Git, Unix/Linux, and ETL pipeline design (see the streaming sketch after this list).
- Experience with Spark optimization/tuning/resource allocations
- Excellent understanding of in-memory distributed computing frameworks like Spark, including parameter tuning and writing optimized workflow sequences.
- Experience with relational databases (e.g., PostgreSQL, MySQL) and NoSQL/analytical databases (e.g., Redshift, BigQuery, Cassandra).
- Familiarity with Docker, Kubernetes, Azure Data Lake/Blob storage, AWS S3, Google Cloud storage, etc.
- Have a deep understanding of the various stacks and components of the Big Data ecosystem.
- Hands-on experience with Python is a huge plus
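For illustration, a hedged PySpark Structured Streaming sketch that reads from Kafka and lands Parquet files. The broker, topic, schema, and paths are hypothetical, and the spark-sql-kafka connector package is assumed to be on the Spark classpath.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import DoubleType, StringType, StructType

spark = SparkSession.builder.appName("orders-stream").getOrCreate()

# Hypothetical message schema
schema = StructType().add("order_id", StringType()).add("amount", DoubleType())

raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")   # hypothetical broker
    .option("subscribe", "orders")                       # hypothetical topic
    .load()
)

# Kafka values arrive as bytes; parse JSON into columns
parsed = raw.select(from_json(col("value").cast("string"), schema).alias("o")).select("o.*")

query = (
    parsed.writeStream.format("parquet")
    .option("path", "/data/bronze/orders")
    .option("checkpointLocation", "/data/checkpoints/orders")
    .start()
)
query.awaitTermination()
```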
Responsibilities:
- Design, implement, and maintain scalable and reliable database solutions on the AWS platform.
- Architect, deploy, and optimize DynamoDB databases for performance, scalability, and cost-efficiency (see the sketch after this list).
- Configure and manage AWS OpenSearch (formerly Amazon Elasticsearch Service) clusters for real-time search and analytics capabilities.
- Design and implement data processing and analytics solutions using AWS EMR (Elastic MapReduce) for large-scale data processing tasks.
- Collaborate with cross-functional teams to gather requirements, design database solutions, and implement best practices.
- Perform performance tuning, monitoring, and troubleshooting of database systems to ensure high availability and performance.
- Develop and maintain documentation, including architecture diagrams, configurations, and operational procedures.
- Stay current with the latest AWS services, database technologies, and industry trends to provide recommendations for continuous improvement.
- Participate in the evaluation and selection of new technologies, tools, and frameworks to enhance database capabilities.
- Provide guidance and mentorship to junior team members, fostering knowledge sharing and skill development.
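As a small, hedged illustration of DynamoDB access-pattern design (not this team's actual schema), a boto3 sketch using a hypothetical table keyed on customer_id (partition key) and order_ts (sort key).

```python
import boto3
from boto3.dynamodb.conditions import Key

# Hypothetical table "Orders": partition key customer_id, sort key order_ts
dynamodb = boto3.resource("dynamodb", region_name="ap-south-1")
orders = dynamodb.Table("Orders")

# Write a single item
orders.put_item(Item={
    "customer_id": "C100",
    "order_ts": "2024-06-01T10:00:00Z",
    "amount": 250,
})

# Query one customer's most recent orders -- the access pattern the key design supports
response = orders.query(
    KeyConditionExpression=Key("customer_id").eq("C100"),
    ScanIndexForward=False,  # newest first
    Limit=10,
)
print(response["Items"])
```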
Requirements:
- Bachelor’s degree in computer science, Information Technology, or related field.
- Proven experience as an AWS Architect or similar role, with a focus on database technologies.
- Hands-on experience designing, implementing, and optimizing DynamoDB databases in production environments.
- In-depth knowledge of AWS OpenSearch (Elasticsearch) and experience configuring and managing clusters for search and analytics use cases.
- Proficiency in working with AWS EMR (Elastic MapReduce) for big data processing and analytics.
- Strong understanding of database concepts, data modelling, indexing, and query optimization.
- Experience with AWS services such as S3, EC2, RDS, Redshift, Lambda, and CloudFormation.
- Excellent problem-solving skills and the ability to troubleshoot complex database issues.
- Solid understanding of cloud security best practices and experience implementing security controls in AWS environments.
- Strong communication and collaboration skills with the ability to work effectively in a team environment.
- AWS certifications such as AWS Certified Solutions Architect, AWS Certified Database - Specialty, or equivalent certifications are a plus.
What’s in it for you?
Opportunity To Unlock Your Creativity
Think of all the times you were held back from trying new ideas because you were boxed in by bureaucratic legacy processes or old-school tactics. A growth mindset has been deeply ingrained in our company culture since day 1, so Fictiv is an environment where you have the creative liberty and support of the team to try big, bold ideas to achieve our sales and customer goals.
Opportunity To Grow Your Career
At Fictiv, you'll be surrounded by supportive teammates who will push you to be your best through their curiosity and passion.
Impact In This Role
Excellent problem solving, decision-making and critical thinking skills.
Collaborative, a team player.
Excellent verbal and written communication skills.
Exhibits initiative, integrity and empathy.
Enjoy working with a diverse group of people in multiple regions.
Comfortable not knowing answers, but resourceful and able to resolve issues.
Self-starter; comfortable with ambiguity, asking questions and constantly learning.
Customer service mentality; advocates for another person's point of view.
Methodical and thorough in written documentation and communication.
Culture oriented; wants to work with people rather than in isolation.
You will report to the Director of IT Engineering
What You’ll Be Doing
- Interface with Business Analysts and Stakeholders to understand & clarify requirements
- Develop technical designs for solution development
- Implement high quality, scalable solutions following best practices, including configuration and code.
- Deploy solutions and code using automated deployment tools
- Take ownership of technical deliverables, ensuring that quality work is completed, fully tested, and delivered on time.
- Conduct code reviews, optimization, and refactoring to minimize technical debt within Salesforce implementations.
- Collaborate with cross-functional teams to integrate Salesforce with other systems and platforms, ensuring seamless data flow and system interoperability.
- Identify opportunities for process improvements, mentor and support other developers/team members as needed.
- Stay updated on new Salesforce features and functionalities and provide recommendations for process improvements.
Desired Traits
- 8-10 years of experience in Salesforce development
- Proven experience in developing Salesforce solutions with a deep understanding of Apex, Visualforce, Lightning Web Components, and Salesforce APIs.
- Have worked in Salesforce CPQ, Sales/Manufacturing Cloud, Case Management
- Experienced in designing and implementing custom solutions that align with business needs.
- Strong knowledge of Salesforce data modeling, reporting, and database design.
- Demonstrated experience in building and maintaining integrations between Salesforce and external applications.
- Strong unit testing, functional testing and debugging skills
- Strong understanding of best practices
- Active Salesforce Certifications are desirable.
- Experience in Mulesoft is a plus
- Excellent communication skills and the ability to translate complex technical requirements into actionable solutions.
Interested in learning more? We look forward to hearing from you soon.
Responsibilities -
- Collaborate with the development team to understand data requirements and identify potential scalability issues.
- Design, develop, and implement scalable data pipelines and ETL processes to ingest, process, and analyse large volumes of data from various sources.
- Optimize data models and database schemas to improve query performance and reduce latency (see the sketch after this list).
- Monitor and troubleshoot the performance of our Cassandra database on Azure Cosmos DB, identifying bottlenecks and implementing optimizations as needed.
- Work with cross-functional teams to ensure data quality, integrity, and security.
- Stay up to date with emerging technologies and best practices in data engineering and distributed systems.
Qualifications & Requirements -
- Proven experience as a Data Engineer or similar role, with a focus on designing and optimizing large-scale data systems.
- Strong proficiency in working with NoSQL databases, particularly Cassandra.
- Experience with cloud-based data platforms, preferably Azure Cosmos DB.
- Solid understanding of distributed systems, data modelling, data warehouse design, and ETL processes.
- Detailed understanding of Software Development Life Cycle (SDLC) is required.
- Good to have: knowledge of a visualization tool such as Power BI or Tableau.
- Good to have: knowledge of the SAP landscape (SAP ECC, SLT, BW, HANA, etc.).
- Good to have: experience on a data migration project.
- Knowledge of the Supply Chain domain would be a plus.
- Familiarity with software architecture (data structures, data schemas, etc.)
- Familiarity with Python programming language is a plus.
- The ability to work in a dynamic, fast-paced work environment.
- A passion for data and information with strong analytical, problem solving, and organizational skills.
- Self-motivated with the ability to work under minimal direction.
- Strong communication and collaboration skills, with the ability to work effectively in a cross-functional team environment.
Role Title: Developer - Guidewire Integration-Config
Role Purpose
We are looking for a Developer for our Claims Guidewire team, who is a technology enthusiast, and eager to be part of a culture of modern software engineering practices, continuous improvement, and innovation.
As a Developer, you will be part of a dynamic engineering team and work on development, maintenance, and transformation of our strategic Claims Guidewire platform. You will learn about software applications, technology stack, ways of working and standards.
Key Accountabilities
· Deliver software development tasks for Claims Guidewire applications, in the areas of Integration and Configuration, with expected quality measures and timeframe, e.g., coding, writing unit test cases (G-Unit) and unit testing, debugging and defect fixing, providing test support, providing release support.
· Communicate with technical leads and IT groups for understanding the project’s technical implications, dependencies, and potential conflicts.
· Research issues reported in Production, perform root cause analysis and document them, respond to and resolve technical issues in a timely manner.
· Perform versioning of the release updates and resolve the code conflicts while merging and promoting the code to higher environments.
· Develop your technical and functional knowledge of the Claims Digital Guidewire platform.
· Understand and follow Guidewire’s cloud standards for application development.
· Active participation in team meetings like daily stand-ups, risk forums, planning sessions and retrospectives.
Skills & Experience
· 3+ years of development experience on Guidewire cloud platform and applications, Guidewire certification preferred.
· Hands on development expertise on Guidewire ClaimCentre with configuration and integration
· Experience in Guidewire platform (Gosu scripting / Edge APIs / UI / Data Model)
· Should have knowledge of Admin data loading, Assignment and Segmentation Rules, Pre-update and Validation rules, Authority limits, Financials (checks, reserves, recoveries …)
· Good experience on LOB configuration and related type-lists
· Good experience on integration components including plug-ins, messaging (and supporting business rules), batches, REST APIs and programs that call the Guidewire application APIs.
· Experience with databases such as Oracle or SQL Server, and well versed in SQL.
· Experience working in a CI/CD setup and with related tools/technologies.
· Insurance domain knowledge with Property & Casualty background preferred.
Location: Gurugram
CTC: Up to 25 LPA

Key Roles/Responsibilities: –
• Develop an understanding of business obstacles, create solutions based on advanced analytics, and draw implications for model development.
• Combine, explore, and draw insights from data, often large and complex data assets from different parts of the business.
• Design and build explorative, predictive, or prescriptive models, utilizing optimization, simulation, and machine learning techniques.
• Prototype and pilot new solutions and be part of the aim of 'productifying' those valuable solutions that can have impact at a global scale.
• Guide and coach other chapter colleagues to help solve data/technical problems at an operational level, and in methodologies to help improve development processes.
• Identify and interpret trends and patterns in complex data sets to enable the business to take data-driven decisions.
Minimum of 8 years of experience, of which 4 years should be applied data mining experience in disciplines such as call centre metrics.
Strong experience in advanced statistics and analytics, including segmentation, modelling, regression, forecasting, etc.
Experience with leading and managing large teams.
Demonstrated pattern of success in using advanced quantitative analytic methods to solve business problems.
Demonstrated experience with Business Intelligence/Data Mining tools to work with data, investigate anomalies, construct data sets, and build models.
It is critical to share details of projects undertaken (preferably in the telecom industry), specifically analysis drawn from CRM data.

About the role:
Hopscotch is looking for a passionate Data Engineer to join our team. You will work closely with other teams like data analytics, marketing, data science and individual product teams to specify, validate, prototype, scale, and deploy data pipelines features and data architecture.
Here’s what will be expected out of you:
➢ Ability to work with a fast-paced startup mindset; should be able to manage all aspects of data extraction, transfer, and load activities.
➢ Develop data pipelines that make data available across platforms.
➢ Should be comfortable in executing ETL (Extract, Transform and Load) processes which include data ingestion, data cleaning and curation into a data warehouse, database, or data platform.
➢ Work on various aspects of the AI/ML ecosystem – data modeling, data and ML pipelines.
➢ Work closely with DevOps and the senior architect to come up with scalable system and model architectures for enabling real-time and batch services.
What we want:
➢ 5+ years of experience as a data engineer or data scientist with a focus on data engineering and ETL jobs.
➢ Well versed in data warehousing, data modelling, and/or data analysis concepts.
➢ Experience building pipelines and performing ETL on Redshift using industry-standard best practices (2+ years).
➢ Ability to troubleshoot and solve performance issues with data ingestion, data processing & query execution on Redshift.
➢ Good understanding of orchestration tools like Airflow.
➢ Strong Python and SQL coding skills.
➢ Strong experience with distributed systems like Spark.
➢ Experience with AWS data and ML technologies (AWS Glue, MWAA, Data Pipeline, EMR, Athena, Redshift, Lambda, etc.).
➢ Solid hands-on experience with data extraction techniques such as CDC or time/batch-based extraction and the related tools (Debezium, AWS DMS, Kafka Connect, etc.) for near-real-time and batch data extraction.
Note: Experience at product-based or e-commerce companies is an added advantage.
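To illustrate the Redshift-plus-Airflow stack listed above, here is a minimal, hedged sketch of a batch-load DAG. All names (DAG id, connection ids, schema, table, bucket, prefix) are hypothetical placeholders, not taken from this posting.

# Minimal Airflow DAG sketch: batch-load extracted files from S3 into Redshift.
# Connection ids, schema, table, bucket, and prefix are hypothetical placeholders.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.providers.amazon.aws.transfers.s3_to_redshift import S3ToRedshiftOperator

default_args = {"retries": 2, "retry_delay": timedelta(minutes=5)}

with DAG(
    dag_id="orders_cdc_to_redshift",        # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@hourly",            # batch cadence; true near-real-time would rely on CDC tooling
    catchup=False,
    default_args=default_args,
) as dag:
    load_orders = S3ToRedshiftOperator(
        task_id="copy_orders_to_redshift",
        schema="analytics",                  # hypothetical target schema
        table="orders_staging",              # hypothetical staging table
        s3_bucket="example-data-lake",       # hypothetical bucket
        s3_key="cdc/orders/",                # hypothetical prefix written by the extraction job
        redshift_conn_id="redshift_default",
        aws_conn_id="aws_default",
        copy_options=["FORMAT AS PARQUET"],
    )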

Requirements:
- 2+ years of experience (4+ for Senior Data Engineer) with system/data integration, development, or implementation of enterprise and/or cloud software. Engineering degree in Computer Science, Engineering, or a related field.
- Extensive hands-on experience with data integration/EAI technologies (File, API, Queues, Streams), ETL Tools and building custom data pipelines.
- Demonstrated proficiency with Python, JavaScript and/or Java
- Familiarity with version control/SCM is a must (experience with git is a plus).
- Experience with relational and NoSQL databases (any vendor), and a solid understanding of cloud computing concepts.
- Strong organisational and troubleshooting skills with attention to detail.
- Strong analytical ability, judgment, and problem-solving techniques; interpersonal and communication skills with the ability to work effectively in a cross-functional team.


Lifespark is looking for individuals with a passion for impacting real lives through technology. Lifespark is one of the most promising startups in the Assistive Tech space in India, and has been honoured with several National and International awards. Our mission is to create seamless, persistent and affordable healthcare solutions. If you are someone who is driven to make a real impact in this world, we are your people.
Lifespark is currently building solutions for Parkinson’s Disease, and we are looking for an ML lead to join our growing team. You will be working directly with the founders on high-impact problems in the Neurology domain. You will be solving some of the most fundamental and exciting challenges in the industry and will see your insights turned into real products every day.
Essential experience and requirements:
1. Advanced knowledge in the domains of computer vision, deep learning
2. Solid understanding of statistical/computational concepts like Hypothesis Testing, Statistical Inference, Design of Experiments and production-level ML system design
3. Experienced with proper project workflow
4. Good at collating multiple datasets (potentially from different sources)
5. Good understanding of setting up production level data pipelines
6. Ability to independently develop and deploy ML systems to various platforms (local and cloud)
7. Fundamentally strong with time-series data analysis, cleaning, featurization and visualisation
8. Fundamental understanding of model and system explainability
9. Proactive at constantly unlearning and relearning
10. Documentation ninja - can understand others' documentation as well as create good documentation
Responsibilities :
1. Develop and deploy ML based systems built upon healthcare data in the Neurological domain
2. Maintain deployed systems and upgrade them through online learning
3. Develop and deploy advanced online data pipelines
- Big data developer with 8+ years of professional IT experience with expertise in Hadoop ecosystem components in ingestion, Data modeling, querying, processing, storage, analysis, Data Integration and Implementing enterprise level systems spanning Big Data.
- A skilled developer with strong problem solving, debugging and analytical capabilities, who actively engages in understanding customer requirements.
- Expertise in Apache Hadoop ecosystem components like Spark, Hadoop Distributed File System (HDFS), MapReduce, Hive, Sqoop, HBase, Zookeeper, YARN, Flume, Pig, NiFi, Scala and Oozie.
- Hands on experience in creating real - time data streaming solutions using Apache Spark core, Spark SQL & DataFrames, Kafka, Spark streaming and Apache Storm.
- Excellent knowledge of Hadoop architecture and the daemons of Hadoop clusters, which include the Name Node, Data Node, Resource Manager, Node Manager and Job History Server.
- Worked with both Cloudera and Hortonworks Hadoop distributions. Experience in managing Hadoop clusters using the Cloudera Manager tool.
- Well versed in installation, Configuration, Managing of Big Data and underlying infrastructure of Hadoop Cluster.
- Hands on experience in coding MapReduce/Yarn Programs using Java, Scala and Python for analyzing Big Data.
- Exposure to Cloudera development environment and management using Cloudera Manager.
- Extensively worked on Spark using Scala on clusters for analytics; installed Spark on top of Hadoop and built advanced analytical applications using Spark with Hive and SQL/Oracle.
- Implemented Spark using Python, utilizing DataFrames and the Spark SQL API for faster processing of data; handled importing data from different data sources into HDFS using Sqoop and performed transformations using Hive and MapReduce before loading the data into HDFS.
- Used Spark Data Frames API over Cloudera platform to perform analytics on Hive data.
- Hands-on experience with Spark MLlib, used for predictive intelligence and customer segmentation, and for smooth maintenance of Spark Streaming jobs.
- Experience in using Flume to load log files into HDFS and Oozie for workflow design and scheduling.
- Experience in optimizing MapReduce jobs to use HDFS efficiently by using various compression mechanisms.
- Worked on creating data pipelines for different ingestion and aggregation events, loading consumer response data into Hive external tables in HDFS to serve as feeds for Tableau dashboards.
- Hands on experience in using Sqoop to import data into HDFS from RDBMS and vice-versa.
- In-depth Understanding of Oozie to schedule all Hive/Sqoop/HBase jobs.
- Hands on expertise in real time analytics with Apache Spark.
- Experience in converting Hive/SQL queries into RDD transformations using Apache Spark, Scala and Python.
- Extensive experience in working with different ETL tool environments like SSIS, Informatica and reporting tool environments like SQL Server Reporting Services (SSRS).
- Experience with the Microsoft cloud and with setting up clusters on Amazon EC2 & S3, including automating the setup and extension of clusters in the AWS cloud.
- Extensively worked on Spark using Python on clusters for analytics; installed Spark on top of Hadoop and built advanced analytical applications using Spark with Hive and SQL.
- Strong experience and knowledge of real time data analytics using Spark Streaming, Kafka and Flume.
- Knowledge in installation, configuration, supporting and managing Hadoop Clusters using Apache, Cloudera (CDH3, CDH4) distributions and on Amazon web services (AWS).
- Experienced in writing Ad Hoc queries using Cloudera Impala, also used Impala analytical functions.
- Experience in creating DataFrames using PySpark and performing operations on the DataFrames using Python.
- In depth understanding/knowledge of Hadoop Architecture and various components such as HDFS and MapReduce Programming Paradigm, High Availability and YARN architecture.
- Established connections to multiple Redshift clusters (Bank Prod, Card Prod, SBBDA Cluster) and provided access for pulling the information needed for analysis.
- Generated various kinds of knowledge reports using Power BI based on Business specification.
- Developed interactive Tableau dashboards to provide a clear understanding of industry specific KPIs using quick filters and parameters to handle them more efficiently.
- Experienced in projects using JIRA, testing, and the Maven and Jenkins build tools.
- Experienced in designing, building, deploying and utilizing almost all of the AWS stack (including EC2 and S3), focusing on high availability, fault tolerance, and auto-scaling.
- Good experience with use-case development, with Software methodologies like Agile and Waterfall.
- Working knowledge of Amazon's Elastic Compute Cloud (EC2) infrastructure for computational tasks and Simple Storage Service (S3) as a storage mechanism.
- Good working experience importing data using Sqoop and SFTP from various sources like RDBMS, Teradata, Mainframes, Oracle and Netezza into HDFS, and performing transformations on it using Hive, Pig and Spark.
- Extensive experience in Text Analytics, developing different Statistical Machine Learning solutions to various business problems and generating data visualizations using Python and R.
- Proficient in NoSQL databases including HBase, Cassandra, MongoDB and its integration with Hadoop cluster.
- Hands on experience in Hadoop Big data technology working on MapReduce, Pig, Hive as Analysis tool, Sqoop and Flume data import/export tools.
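As a rough illustration of the Hive-backed Spark workflow described above (ingest into HDFS, transform, feed Tableau dashboards), here is a minimal PySpark sketch; the database, table, and column names are invented for the example.

# Minimal PySpark sketch: aggregate events from a Hive external table for dashboards.
# All database/table/column names below are illustrative placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("consumer-response-aggregation")
    .enableHiveSupport()          # read/write Hive tables
    .getOrCreate()
)

# Read raw events previously ingested into a Hive external table (e.g. via Sqoop/Flume).
events = spark.table("raw_db.consumer_response_events")

# Aggregate responses per campaign and day to feed a downstream dashboard.
daily_summary = (
    events
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("campaign_id", "event_date")
    .agg(
        F.count("*").alias("responses"),
        F.countDistinct("customer_id").alias("unique_customers"),
    )
)

# Write the result back to a Hive table that a Tableau dashboard can read.
daily_summary.write.mode("overwrite").saveAsTable("analytics_db.consumer_response_daily")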
Job Title: AWS-Azure Data Engineer with Snowflake
Location: Bangalore, India
Experience: 4+ years
Budget: 15 to 20 LPA
Notice Period: Immediate joiners or less than 15 days
Job Description:
We are seeking an experienced AWS-Azure Data Engineer with expertise in Snowflake to join our team in Bangalore. As a Data Engineer, you will be responsible for designing, implementing, and maintaining data infrastructure and systems using AWS, Azure, and Snowflake. Your primary focus will be on developing scalable and efficient data pipelines, optimizing data storage and processing, and ensuring the availability and reliability of data for analysis and reporting.
Responsibilities:
- Design, develop, and maintain data pipelines on AWS and Azure to ingest, process, and transform data from various sources.
- Optimize data storage and processing using cloud-native services and technologies such as AWS S3, AWS Glue, Azure Data Lake Storage, Azure Data Factory, etc.
- Implement and manage data warehouse solutions using Snowflake, including schema design, query optimization, and performance tuning.
- Collaborate with cross-functional teams to understand data requirements and translate them into scalable and efficient technical solutions.
- Ensure data quality and integrity by implementing data validation, cleansing, and transformation processes.
- Develop and maintain ETL processes for data integration and migration between different data sources and platforms.
- Implement and enforce data governance and security practices, including access control, encryption, and compliance with regulations.
- Collaborate with data scientists and analysts to support their data needs and enable advanced analytics and machine learning initiatives.
- Monitor and troubleshoot data pipelines and systems to identify and resolve performance issues or data inconsistencies.
- Stay updated with the latest advancements in cloud technologies, data engineering best practices, and emerging trends in the industry.
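A minimal sketch of the kind of Snowflake load-and-validate step implied by the responsibilities above, assuming data has already been landed in an external stage by an AWS/Azure pipeline; the account, credentials, stage, and table names are hypothetical.

# Minimal sketch: load staged files into Snowflake and run a simple validation query.
# Account, user, warehouse, database, stage, and table names are hypothetical placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="xy12345.ap-south-1",      # hypothetical account locator
    user="etl_user",
    password="***",                    # in practice, pull from a secrets manager
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="RAW",
)

try:
    cur = conn.cursor()
    # COPY data that an upstream pipeline has already landed in an external stage.
    cur.execute("""
        COPY INTO RAW.ORDERS
        FROM @RAW.ORDERS_STAGE
        FILE_FORMAT = (TYPE = PARQUET)
        MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE
    """)
    # Simple data-quality check after the load.
    cur.execute("SELECT COUNT(*) FROM RAW.ORDERS WHERE LOAD_DATE = CURRENT_DATE")
    print("rows loaded today:", cur.fetchone()[0])
finally:
    conn.close()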
Requirements:
- Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
- Minimum of 4 years of experience as a Data Engineer, with a focus on AWS, Azure, and Snowflake.
- Strong proficiency in data modelling, ETL development, and data integration.
- Expertise in cloud platforms such as AWS and Azure, including hands-on experience with data storage and processing services.
- In-depth knowledge of Snowflake, including schema design, SQL optimization, and performance tuning.
- Experience with scripting languages such as Python or Java for data manipulation and automation tasks.
- Familiarity with data governance principles and security best practices.
- Strong problem-solving skills and ability to work independently in a fast-paced environment.
- Excellent communication and interpersonal skills to collaborate effectively with cross-functional teams and stakeholders.
- Immediate joiner or notice period less than 15 days preferred.
If you possess the required skills and are passionate about leveraging AWS, Azure, and Snowflake to build scalable data solutions, we invite you to apply. Please submit your resume and a cover letter highlighting your relevant experience and achievements in the AWS, Azure, and Snowflake domains.
Experience Required: 5 -10 yrs.
Job location: Sec-62, Noida
Work from office (Hybrid)
Development Platform: Backend Development - Java/J2EE, Struts, Spring, MySQL, OWASP
Job Brief:
Requirements:
· 5+ years of experience in developing distributed, multi-tier enterprise applications, APIs.
· Fully participated in several major product development cycles.
· Solid background in design, OOP, and object and data modelling.
· Deep working knowledge of Java, Struts, Spring, and relational databases.
· Experience in design and implementation of service interfaces and public APIs.
· Actively involved in writing code in the current project.
· Development knowledge of and experience working with AWS, Azure, etc., will be an added plus.
· Clear understanding of and hands-on experience with OWASP Top 10 vulnerabilities such as XSS, CSRF, SQL injection, session hijacking, and authorization bypass.
· Find and resolve security concerns in the product/application.
· Good documentation and reporting skills; strong communication and collaboration skills with various levels of executives, from top management to technical team members across the organization.
· Strong self-starter who can operate independently.
DATA ENGINEER
Overview
They started with a singular belief - what is beautiful cannot and should not be defined in marketing meetings. It's defined by the regular people like us, our sisters, our next-door neighbours, and the friends we make on the playground and in lecture halls. That's why we stand for people-proving everything we do. From the inception of a product idea to testing the final formulations before launch, our consumers are a part of each and every process. They guide and inspire us by sharing their stories with us. They tell us not only about the product they need and the skincare issues they face but also the tales of their struggles, dreams and triumphs. Skincare goes deeper than skin. It's a form of self-care for many. Wherever someone is on this journey, we want to cheer them on through the products we make, the content we create and the conversations we have. What we wish to build is more than a brand. We want to build a community that grows and glows together - cheering each other on, sharing knowledge, and ensuring people always have access to skincare that really works.
Job Description:
We are seeking a skilled and motivated Data Engineer to join our team. As a Data Engineer, you will be responsible for designing, developing, and maintaining the data infrastructure and systems that enable efficient data collection, storage, processing, and analysis. You will collaborate with cross-functional teams, including data scientists, analysts, and software engineers, to implement data pipelines and ensure the availability, reliability, and scalability of our data platform.
Responsibilities:
Design and implement scalable and robust data pipelines to collect, process, and store data from various sources.
Develop and maintain data warehouse and ETL (Extract, Transform, Load) processes for data integration and transformation.
Optimize and tune the performance of data systems to ensure efficient data processing and analysis.
Collaborate with data scientists and analysts to understand data requirements and implement solutions for data modeling and analysis.
Identify and resolve data quality issues, ensuring data accuracy, consistency, and completeness.
Implement and maintain data governance and security measures to protect sensitive data.
Monitor and troubleshoot data infrastructure, perform root cause analysis, and implement necessary fixes.
Stay up-to-date with emerging technologies and industry trends in data engineering and recommend their adoption when appropriate.
Qualifications:
Bachelor’s or higher degree in Computer Science, Information Systems, or a related field.
Proven experience as a Data Engineer or similar role, working with large-scale data processing and storage systems.
Strong programming skills in languages such as Python, Java, or Scala.
Experience with big data technologies and frameworks like Hadoop, Spark, or Kafka.
Proficiency in SQL and database management systems (e.g., MySQL, PostgreSQL, or Oracle).
Familiarity with cloud platforms like AWS, Azure, or GCP, and their data services (e.g., S3, Redshift, BigQuery).
Solid understanding of data modeling, data warehousing, and ETL principles.
Knowledge of data integration techniques and tools (e.g., Apache Nifi, Talend, or Informatica).
Strong problem-solving and analytical skills, with the ability to handle complex data challenges.
Excellent communication and collaboration skills to work effectively in a team environment.
Preferred Qualifications:
Advanced knowledge of distributed computing and parallel processing.
Experience with real-time data processing and streaming technologies (e.g., Apache Kafka, Apache Flink).
Familiarity with machine learning concepts and frameworks (e.g., TensorFlow, PyTorch).
Knowledge of containerization and orchestration technologies (e.g., Docker, Kubernetes).
Experience with data visualization and reporting tools (e.g., Tableau, Power BI).
Certification in relevant technologies or data engineering disciplines.
Required Skills:
• Minimum of 4-6 years of experience in data modeling (including conceptual, logical and physical data models).
• 2-3 years of experience in Extraction, Transformation and Loading (ETL) work using data migration tools like Talend, Informatica, DataStage, etc.
• 4-6 years of experience as a database developer in Oracle, MS SQL or another enterprise database, with a focus on building data integration processes.
• Candidates should have exposure to a NoSQL technology, preferably MongoDB.
• Experience in processing large data volumes, indicated by experience with Big Data platforms (Teradata, Netezza, Vertica or Cloudera, Hortonworks, SAP HANA, Cassandra, etc.).
• Understanding of data warehousing concepts and decision support systems.
• Ability to deal with sensitive and confidential material and adhere to worldwide data security standards.
• Experience writing documentation for design and feature requirements.
• Experience developing data-intensive applications on cloud-based architectures and infrastructures such as AWS, Azure, etc.
• Excellent communication and collaboration skills.

Responsibilities
This role requires a person to support business charters & accompanying products by aligning with the Analytics Manager’s vision, understanding tactical requirements, and helping in successful execution. The split would be approximately 70% management + 30% individual contributor. Responsibilities include:
Project Management
- Understand business needs and objectives.
- Refine use cases and plan iterations and deliverables - able to pivot as required.
- Estimate efforts and conduct regular task updates to ensure timeline adherence.
- Set and manage stakeholder expectations as required
Quality Execution
- Help BA and SBA resources with requirement gathering and final presentations.
- Resolve blockers regarding technical challenges and decision-making.
- Check final deliverables for correctness and review codes, along with Manager.
KPIs and metrics
- Orchestrate metrics building, maintenance, and performance monitoring.
- Own and manage data models, data sources, and the data definition repo.
- Make low-level design choices during execution.
Team Nurturing
- Help Analytics Manager during regular one-on-ones + check-ins + recruitment.
- Provide technical guidance whenever required.
- Improve benchmarking and decision-making skills at execution-level.
- Train and get new resources up-to-speed.
- Knowledge building (methodologies) to better position the team for complex problems.
Communication
- Upstream to document and discuss execution challenges, process inefficiencies, and feedback loops.
- Downstream and parallel for context-building, mentoring, stakeholder management.
Analytics Stack
- Analytics : Python / R + SQL + Excel / PPT, Colab notebooks
- Database : PostgreSQL, Amazon Redshift, DynamoDB, Aerospike
- Warehouse : Amazon Redshift
- ETL : Lots of Python + custom-made
- Business Intelligence / Visualization : Metabase + Python/R libraries (location data)
- Deployment pipeline : Docker, Git, Jenkins, AWS Lambda
Roles and Responsibilities:
● Perform detailed feature requirements analysis along with a team of Senior Developers, define system functionality, work on system design and document the same
● Design/develop/improve Cogno AI’s backend infrastructure and stack, and build fault-tolerant, scalable and real-time distributed systems
● Own the design, development and deployment of code to improve product and platform functionality
● Take initiative and propose ideas for improving the technology team's processes, leading to better team performance and more robust solutions
● Write high-performance, reliable and maintainable code
● Support the team with timely analysis and debugging of operational issues
● Emphasis on automation and scripting
● Cross-functional communication to deliver projects
● Mentor junior team members technically and manage a team of software engineers
● Take interviews and create tests for hiring people in the technology team
What do we look for?
The following are the important eligibility requirements for this Job:
● Bachelor's or Master's degree in computer science or equivalent.
● 5+ years of experience working as a software engineer, preferably in a product-based
company.
● Experience working with major cloud solutions: AWS (preferred), Azure, and GCP.
● Familiarity with 3-tier and microservices architectures and distributed systems
● Experience with the design & development of RESTful services
● Experience with developing Linux-based applications, networking and scripting.
● Experience with different data stores, data modelling and scaling them
● Familiarity with data stores such as PostgreSQL, MySQL, Mongo-DB etc.
● 4+ years of experience with web frameworks (preferably Django, Flask etc.)
● Good understanding of data structures, multi-threading and concurrency concepts.
● Experience with DevOps tools like Jenkins, Ansible, Kubernetes, and Git is a plus.
● Familiarity with Elasticsearch queries and visualization tools like Grafana and Kibana
● Strong networking fundamentals: Firewalls, Proxies, DNS, Load Balancing, etc.
● Strong analytical and problem-solving skills.
● Excellent written and verbal communication skills.
● Team player, flexible and able to work in a fast-paced environment.
● End-to-end ownership of the product. You own what you develop.

What is the role?
Expected to manage the product plan, engineering, and delivery of Xoxoday Plum. Plum is a rewards and incentives infrastructure for businesses. It's a unified, integrated suite of products to handle various rewarding use cases for consumers, sales, channel partners, and employees. 31% of the total tech team is aligned towards this product and comprises 32 members across Plum Tech, Quality, Design, and Product Management. The annual FY 2019-20 revenue for Plum was $40MN, and it is showing high growth potential this year as well. The product has a good mix of both domestic and international clientele and is expanding. The role will be based out of our head office in Bangalore, Karnataka; however, we are open to discussing the option of remote working with 25-50% travel.
Key Responsibilities
- Scope and lead technology with the right product and business metrics.
- Directly contribute to product development by writing code if required.
- Architect systems for scale and stability.
- Serve as a role model for our high engineering standards and bring consistency to the many codebases and processes you will encounter.
- Collaborate with stakeholders across disciplines like sales, customers, product, design, and customer success.
- Code reviews and feedback.
- Build simple solutions and designs over complex ones, and have a good intuition for what is lasting and scalable.
- Define a process for maintaining a healthy engineering culture (cadence for one-on-ones, meeting structures, HLDs, best practices in development, etc.).
What are we looking for?
- Manage a senior tech team of more than 5 direct and 25 indirect developers.
- Should have experience in handling e-commerce applications at scale.
- Should have at least 7+ years of experience in software development, agile processes for international e-commerce businesses.
- Should be extremely hands-on, full-stack developer with modern architecture.
- Should exhibit skills to build a good engineering team and culture.
- Should be able to handle the chaos of product planning and prioritizing with a customer-first approach.
- Technical proficiency
- JavaScript, SQL, NoSQL, PHP
- Frameworks like React, ReactNative, Node.js, GraphQL
- Database technologies like ElasticSearch, Redis, MySQL, Cassandra, MongoDB, Kafka
- DevOps to manage and architect infra - AWS, CI/CD (Jenkins)
- System Architecture w.r.t Microservices, Cloud Development, DB Administration, Data Modeling
- Understanding of security principles and possible attacks, and how to mitigate them.
Whom will you work with?
You will lead the Plum Engineering team and work in close conjunction with the Tech leads of Plum with some cross-functional stake with other products. You'll report to the co-founder directly.
What can you look for?
A wholesome opportunity in a fast-paced environment with scale, international flavour, backend, and frontend. Work with a team of highly talented young professionals and enjoy the benefits of being at Xoxoday.
We are
A fast-growing SaaS commerce company based in Bangalore with offices in Delhi, Mumbai, SF, Dubai, Singapore, and Dublin. We have three products in our portfolio: Plum, Empuls, and Compass. Xoxoday works with over 1000 global clients. We help our clients in engaging and motivating their employees, sales teams, channel partners, or consumers for better business results.
Way forward
We look forward to connecting with you. As you may take time to review this opportunity, we will wait for a reasonable time of around 3-5 days before we screen the collected applications and start lining up job discussions with the hiring manager. We however assure you that we will attempt to maintain a reasonable time window for successfully closing this requirement. The candidates will be kept informed and updated on the feedback and application status.
As an engineer, you will help with the implementation, and launch of many key product features. You will get an opportunity to work on a wide range of technologies (including Spring, AWS Elastic Search, Lambda, ECS, Redis, Spark, Kafka etc.) and apply new technologies for solving problems. You will have an influence on defining product features, drive operational excellence, and spearhead the best practices that enable a quality product. You will get to work with skilled and motivated engineers who are already contributing to building high-scale and high-available systems.
If you are looking for an opportunity to work on leading technologies, would like to build product technology that caters to millions of customers while providing them the best experience, and relish large ownership and diverse technologies, join our team today!
What You'll Do:
- Creating detailed design, working on development and performing code reviews.
- Implementing validation and support activities in line with architecture requirements
- Help the team translate the business requirements into R&D tasks and manage the roadmap of the R&D tasks.
- Designing, building, and implementation of the product; participating in requirements elicitation, validation of architecture, creation and review of high and low level design, assigning and reviewing tasks for product implementation.
- Work closely with product managers, UX designers and end users, integrating software components into a fully functional system
- Ownership of product/feature end-to-end for all phases from the development to the production.
- Ensuring the developed features are scalable and highly available with no quality concerns.
- Work closely with senior engineers to refine the design and implementation.
- Management and execution against project plans and delivery commitments.
- Assist directly and indirectly in the continual hiring and development of technical talent.
- Create and execute appropriate quality plans, project plans, test strategies and processes for development activities in concert with business and project management efforts.
The ideal candidate is an engineer who is passionate about delivering experiences that delight customers and about creating solutions that are robust. He/she should be able to commit to and own the deliverables end-to-end.
What You'll Need:
- A Bachelor's degree in Computer Science or related technical discipline.
- 2-3+ years of Software Development experience with proficiency in Java or equivalent object-oriented languages, coupled with design and SOA experience
- Fluency with Java and Spring is good to have.
- Experience with JEE applications and frameworks like Struts, Spring, MyBatis, Maven and Gradle
- Strong knowledge of Data Structures, Algorithms and CS fundamentals.
- Experience in at least one shell scripting language, SQL, SQL Server, PostgreSQL and data modeling skills
- Excellent analytical and reasoning skills
- Ability to learn new domains and deliver output
- Hands on Experience with the core AWS services
- Experience working with CI/CD tools (Jenkins, Spinnaker, Nexus, GitLab, TeamCity, GoCD, etc.)
- Expertise in at least one of the following:
- Kafka, ZeroMQ, AWS SNS/SQS, or equivalent streaming technology (a minimal Kafka sketch follows this list)
- Distributed cache/in memory data grids like Redis, Hazelcast, Ignite, or Memcached
- Distributed column store databases like Snowflake, Cassandra, or HBase
- Spark, Flink, Beam, or equivalent streaming data processing frameworks
- Proficiency in writing and reviewing Python and other object-oriented language(s) is a plus
- Experience building automations and CI/CD pipelines (integration, testing, deployment)
- Experience with Kubernetes would be a plus.
- Good understanding of working with distributed teams using Agile: Scrum, Kanban
- Strong interpersonal skills as well as excellent written and verbal communication skills
- Attention to detail and quality, and the ability to work well in and across teams
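As a minimal illustration of the streaming expertise called out above, here is a hedged Kafka producer/consumer sketch using the kafka-python client; the broker address and topic name are placeholders, not part of this posting.

# Minimal Kafka round trip: produce one JSON event and consume it back.
# Broker address and topic name are hypothetical placeholders.
import json
from kafka import KafkaConsumer, KafkaProducer

BOOTSTRAP = "localhost:9092"          # hypothetical broker
TOPIC = "order-events"                # hypothetical topic

# Produce a small event.
producer = KafkaProducer(
    bootstrap_servers=BOOTSTRAP,
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send(TOPIC, {"order_id": 42, "status": "CREATED"})
producer.flush()

# Consume it back, stopping if no messages arrive within five seconds.
consumer = KafkaConsumer(
    TOPIC,
    bootstrap_servers=BOOTSTRAP,
    group_id="demo-consumers",
    auto_offset_reset="earliest",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    consumer_timeout_ms=5000,
)
for message in consumer:
    print(message.topic, message.partition, message.offset, message.value)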

Are you bored of writing banking apps, making people click on more ads, or re-skinning and making clones of Temple Run?
Did you ever think you could use your skills to change the world?
If you consider yourself as more of an artist who paints on the technology canvas, we want you!!!
GOQii is your chance to work with an amazing team that is driven by passion and is here to disrupt the wearable technology & fitness space.
Roles and Responsibilities:
Relevant Experience on native App Development.
Solid understanding of the full mobile development life cycle.
UI development with latest framework and techniques.
Understanding of asynchronous client/server interfacing.
Solid grip on SQLite and data modelling.
Experience with 3rd party libraries & APIs - Social, Payment, Network, Crash, and Analytics etc.
Experience in handling the performance and memory of App using various tools.
Focus on building high performance, stable and maintainable code.
Good logical and analytical skills.
Experience with Git / SVN version control software.
Thorough understanding of OOP concepts.
Proficient with Java and Kotlin.
Clear understanding of Android SDK, Android studio, APIs, DBs, Material Design.
Realm and Room database.
Understanding of design patterns.
Background task, threading concept.


Position Description:
TTEC Digital is looking for enthusiastic Developers for Genesys Contact Center products and custom-developed Cloud solutions. As a Developer, you will function as an active member of the Development team across the phases of a project’s lifecycle (Design, Build, Deploy, Accept), working on web and Windows services, APIs, and applications that integrate with our customers’ back-end CRM systems, databases, and external 3rd-party APIs.
Responsibilities:
- Works with customers as needed to translate design requirements into application solutions, ensuring the requirements are met according to the team’s and practice area’s standards and best practices.
- Communicates with project manager/client to identify application requirements.
- Ensures applications meet the standards and requirements of both the client and project manager.
- Conducts tests of the application for functionality, reliability and stabilization.
- Deploys/implements the application to the client.
- Maintains and supports existing applications by fixing problems, addressing issues and determining the need for enhancements.
- Demonstrates concern for meeting client needs in a manner that provides satisfaction and excellent results for the client, leading to additional opportunities within the client account.
- Performs all tasks within the budget and on time while meeting all necessary project requirements. Communicates regularly if budget and/or scope changes.
- Demonstrate professionalism and leadership in representing the Company to customers and vendors.
- Core PureConnect handler development & maintenance.
- Monitor and respond to system errors. Participate in on-call rotation.
- Follow-up on and resolve outstanding issues in a timely manner.
- Update customer to reflect changes in system configuration as needed.
- Understand system hardware/software to be able to identify problems and provide a remedy.
- Handle TAC/Engineering escalations as directed by the team lead or team manager.
Requirements
- Bachelor’s degree in computer science, business, or related area.
- 3+ years of relevant experience and proven ability as a software developer.
- Experience with the Microsoft development platform.
- Experience with .NET Framework.
- Professional experience with integration services including XML, SOAP, REST, TCP/IP, JavaScript, and HTML.
- Deep Understanding of application architecture.
- Familiarity with data modeling and architecture.
- Deep expertise and familiarity with the Pure Cloud development platform.
We offer an outstanding career development opportunity, a competitive salary along with full comprehensive benefits. We are looking for individuals with a team player attitude, strong drive for career growth and a passion for excellence in client support, delivery, and satisfaction.
API Lead Developer
Job Overview:
As an API developer for a very large client, you will be filling the role of a hands-on Azure API Developer. We are looking for someone who has the necessary technical expertise to build and maintain sustainable API solutions to support identified needs and expectations from the client.
Delivery Responsibilities
- Implement an API architecture using Azure API Management, including security, API Gateway, Analytics, and API Services
- Design reusable assets, components, standards, frameworks, and processes to support and facilitate API and integration projects
- Conduct functional, regression, and load testing on APIs
- Gather requirements and define the strategy for application integration
- Develop using the following types of Integration protocols/principles: SOAP and Web services stack, REST APIs, RESTful, RPC/RFC
- Analyze, design, and coordinate the development of major components of the APIs including hands on implementation, testing, review, build automation, and documentation
- Work with DevOps team to package release components to deploy into higher environment
Required Qualifications
- Expert Hands-on experience in the following:
- Technologies such as Spring Boot, Microservices, API Management & Gateway, Event Streaming, Cloud-Native Patterns, Observability & Performance optimizations
- Data modelling, Master and Operational Data Stores, Data ingestion & distribution patterns, ETL / ELT technologies, Relational and Non-Relational DB's, DB Optimization patterns
- At least 5+ years of experience with Azure APIM
- At least 8+ years’ experience in Azure SaaS and PaaS
- At least 8+ years’ experience in API Management, including technologies such as MuleSoft and Apigee
- At least last 5 years in consulting with the latest implementation on Azure SaaS services
- At least 5+ years in MS SQL / MySQL development including data modeling, concurrency, stored procedure development and tuning
- Excellent communication skills with a demonstrated ability to engage, influence, and encourage partners and stakeholders to drive collaboration and alignment
- High degree of organization, individual initiative, results and solution oriented, and personal accountability and resiliency
- Should be a self-starter and team player, capable of working with a team of architects, co-developers, and business analysts
Preferred Qualifications:
- Ability to work as a collaborative team, mentoring and training the junior team members
- Working knowledge of building and working on/around data integration, engineering, and orchestration
- Position requires expert knowledge across multiple platforms, integration patterns, processes, data/domain models, and architectures.
- Candidates must demonstrate an understanding of the following disciplines: enterprise architecture, business architecture, information architecture, application architecture, and integration architecture.
- Ability to focus on business solutions and understand how to achieve them according to the given timeframes and resources.
- Recognized as an expert/thought leader. Anticipates and solves highly complex problems with a broad impact on a business area.
- Experience with Agile Methodology / Scaled Agile Framework (SAFe).
- Outstanding oral and written communication skills including formal presentations for all levels of management combined with strong collaboration/influencing.
Preferred Education/Skills:
- Prefer Master’s degree
- Bachelor’s Degree in Computer Science with a minimum of 12+ years relevant experience or equivalent.
JOB DESCRIPTION: THE IDEAL CANDIDATE WILL:
• Ensure new features and subject areas are modelled to integrate with existing structures and provide a consistent view. Develop and maintain documentation of the data architecture, data flow and data models of the data warehouse appropriate for various audiences. Provide direction on adoption of Cloud technologies (Snowflake) and industry best practices in the field of data warehouse architecture and modelling.
• Provide technical leadership to large enterprise-scale projects. You will also be responsible for preparing estimates and defining technical solutions to proposals (RFPs). This role requires a broad range of skills and the ability to step into different roles depending on the size and scope of the project.
ELIGIBILITY CRITERIA: Desired Experience/Skills:
• Must have a total of 5+ years in IT, 2+ years' experience working as a Snowflake Data Architect, and 4+ years in data warehouse, ETL, and BI projects.
• Must have experience with at least two end-to-end implementations of the Snowflake cloud data warehouse and three end-to-end on-premises data warehouse implementations, preferably on Oracle.
• Expertise in Snowflake – data modelling, ELT using Snowflake SQL, implementing complex stored Procedures and standard DWH and ETL concepts
• Expertise in Snowflake advanced concepts like setting up resource monitors, RBAC controls, virtual warehouse sizing, query performance tuning, Zero copy clone, time travel and understand how to use these features
• Expertise in deploying Snowflake features such as data sharing, events and lake-house patterns
• Hands-on experience with Snowflake utilities, SnowSQL, Snowpipe, and Big Data modelling techniques using Python
• Experience in Data Migration from RDBMS to Snowflake cloud data warehouse
• Deep understanding of relational as well as NoSQL data stores, methods and approaches (star and snowflake, dimensional modelling)
• Experience with data security and data access controls and design
• Experience with AWS or Azure data storage and management technologies such as S3 and ADLS
• Build processes supporting data transformation, data structures, metadata, dependency and workload management
• Proficiency in RDBMS, complex SQL, PL/SQL, Unix shell scripting, performance tuning and troubleshooting
• Provide resolution to an extensive range of complicated data pipeline related problems, proactively and as issues surface
• Must have expertise in AWS or Azure Platform as a Service (PAAS)
• Certified Snowflake cloud data warehouse Architect (Desirable)
• Should be able to troubleshoot problems across infrastructure, platform and application domains.
• Must have experience of Agile development methodologies
• Strong written communication skills; effective and persuasive in both written and oral communication.
Nice-to-have Skills/Qualifications: Bachelor's and/or master’s degree in computer science or equivalent experience.
• Strong communication, analytical and problem-solving skills with a high attention to detail.
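As a hedged illustration of the Snowflake features listed above (resource monitors, warehouse sizing controls, zero-copy clone, time travel), the following Python sketch issues the corresponding SQL through the Snowflake connector. All object names are hypothetical and an ACCOUNTADMIN-like role is assumed; this is a sketch, not a definitive setup script.

# Illustrative Snowflake admin sketch: resource monitor, zero-copy clone, time travel.
# Account, credentials, warehouse, schema, and table names are hypothetical placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="xy12345.eu-west-1",   # hypothetical account locator
    user="platform_admin",
    password="***",                # in practice, use a secrets manager or key-pair auth
    role="ACCOUNTADMIN",
)
cur = conn.cursor()

# Resource monitor to cap monthly credits on an ETL warehouse.
cur.execute("""
    CREATE OR REPLACE RESOURCE MONITOR etl_monthly_monitor
      WITH CREDIT_QUOTA = 500 FREQUENCY = MONTHLY START_TIMESTAMP = IMMEDIATELY
      TRIGGERS ON 90 PERCENT DO NOTIFY
               ON 100 PERCENT DO SUSPEND
""")
cur.execute("ALTER WAREHOUSE ETL_WH SET RESOURCE_MONITOR = etl_monthly_monitor")

# Zero-copy clone of a production schema for a regression-test run.
cur.execute("CREATE OR REPLACE SCHEMA ANALYTICS.REG_TEST CLONE ANALYTICS.PROD")

# Time travel: inspect a table as it looked one hour ago.
cur.execute("SELECT COUNT(*) FROM ANALYTICS.PROD.ORDERS AT(OFFSET => -3600)")
print("row count one hour ago:", cur.fetchone()[0])

conn.close()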
About you:
• You are self-motivated, collaborative, eager to learn, and hands on
• You love trying out new apps, and find yourself coming up with ideas to improve them
• You stay ahead with all the latest trends and technologies
• You are particular about following industry best practices and have high standards regarding quality


- Lead multiple client projects in the organization.
- Define & build technical architecture for projects.
- Introduce & ensure right coding standards within the team and ensure that it is maintained in all the projects.
- Ensure the quality delivery of projects.
- Ensure applications conform to security guidelines wherever required.
- Assist pre-sales/sales team for converting raw requirements from potential clients to functional solutions.
- Train fellow team members to apply the best practices available.
- Work on improving and managing processes within the team.
- Implement innovative ideas throughout the team to improve the overall efficiency and quality of the team.
- Ensure proper communication & collaboration within the team
Requirements
7+ Years of experience in developing large scale applications.
Solid Domain Knowledge and experience with various Design Patterns & Data Modelling (Associations, OOPs Concepts etc.)
Should have exposure to multiple backend technologies and databases - both relational and NoSQL
Should be aware of latest conventions for APIs
Preferred hands-on experience with GraphQL as well as REST API
Must be well-aware of the latest technological advancements for relevant platforms.
Advanced database concepts - views, stored procedures, database optimization - are good to have.
Should have Research Oriented Approach
Solid at Logical thinking and Problem solving
Solid understanding of Coding Standards, Code Review processes and delivery of quality products
Experience with various Tools used in Development, Tests & Deployments.
Sound knowledge of DevOps and CI/CD Pipeline Tools
Solid experience with Git Workflow on Enterprise projects and larger teams
Should be good at documentation at the project level and code level; should have good experience with Agile methodology and processes
Should have a good understanding of server side deployment, scalability, maintainability and handling server security problems.
Should have a good understanding of software UX
Proficient with communication and good at making software architectural judgments
Expected outcomes
- Growing the team and retaining talent, thus, creating an inspiring environment for the team members.
- Creating more leadership within the team along with mentoring and guiding new joiners and experienced developers.
- Creating growth plans for the team and preparing training guides for other team members.
- Refining processes in the team on a regular basis to ensure quality delivery of projects- such as coding standards, project collaboration, code review processes etc.
- Improving overall efficiency and team productivity by introducing new methodologies and ideas in the team.
- Working on R&D and employing innovative technologies in the company.
- Streamlining processes, which will result in saving time and optimizing costs
- Ensuring code review healthiness and shipping superior quality code
Benefits
- Unlimited learning and growth opportunities
- A collaborative and cheerful work environment
- Exceptional reward and recognition policy
- Outstanding compensation
- Flexible work hours
- Opportunity to make an impact as your work will directly contribute to our business strategy.
At Nickelfox, you have a chance to craft a career path as unique as you are and become the best version of YOU. You will be part of a team with a ‘no limits’ mindset in an inclusive, people-focused culture. And we’re counting on your unique perspective to help Nickelfox grow even faster.
Are you passionate about tech? Dedicated to learning? Come, join us to build an extraordinary experience for yourself and a dignified working world for all.
What makes Nickelfox a great place for you?
In Nickelfox, you’ll join a team whose passion for technology and understanding of business has driven the company to serve clients across 25+ countries in just five years. We partner with our customers to fuel their growth story and enable them to make the right decisions with our customized technology services and insights. All in all, we are passionate to see our customers win the day. This is the reason why 80% of our business comes from repeat clients.
Our mission is to provide dignified employment and an environment that recognizes the uniqueness of every individual and values their expertise, and contribution. We have a culture that encourages everyone to bring their authentic selves to work. Our people enjoy a collaborative work environment with exceptional training and career development. If you like working with a curious, youthful, high-performing team, Nickelfox is the place for you.