50+ SQL Jobs in Pune | SQL Job openings in Pune
Apply to 50+ SQL Jobs in Pune on CutShort.io. Explore the latest SQL Job opportunities across top companies like Google, Amazon & Adobe.


Role Overview
We are looking for a highly skilled Product Engineer to join our dynamic team. This is an exciting opportunity to work on innovative FinTech solutions and contribute to the future of global payments. If you're passionate about backend development, API design, and scalable architecture, we'd love to hear from you!
Key Responsibilities
- Design, develop, and maintain scalable, high-performance backend systems.
- Write clean, maintainable, and efficient code while following best practices.
- Build and optimize RESTful APIs and database queries (a minimal sketch follows this list).
- Collaborate with cross-functional teams to deliver 0 to 1 products.
- Ensure smooth CI/CD pipeline implementation and deployment automation.
- Contribute to open-source projects and stay updated with industry trends.
- Maintain a strong focus on security, performance, and reliability.
- Work with payment protocols and financial regulations to ensure compliance.
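To illustrate the kind of API-plus-database work the bullets above describe, here is a minimal sketch in Python using Flask and SQLAlchemy. The role itself is language-agnostic (with a Ruby on Rails preference), so the framework choice, table, and field names below are illustrative assumptions only.

```python
# Minimal sketch of a REST endpoint backed by a SQL database.
# Flask and SQLAlchemy are illustrative choices; the table and columns are hypothetical.
from flask import Flask, jsonify
from sqlalchemy import create_engine, text

app = Flask(__name__)
engine = create_engine("sqlite:///payments.db")  # hypothetical local database

@app.route("/payments/<int:payment_id>")
def get_payment(payment_id):
    # Parameterised query: avoids SQL injection and keeps the query plan cacheable.
    with engine.connect() as conn:
        row = conn.execute(
            text("SELECT id, amount, currency, status FROM payments WHERE id = :id"),
            {"id": payment_id},
        ).mappings().first()
    if row is None:
        return jsonify({"error": "not found"}), 404
    return jsonify(dict(row))

if __name__ == "__main__":
    app.run(debug=True)
```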
Required Skills & Qualifications
- ✅ 3+ years of professional software development experience.
- ✅ Proficiency in any backend language (with preference for Ruby on Rails).
- ✅ Strong foundation in architecture, design, and database optimization.
- ✅ Experience in building APIs and working with SQL/NoSQL databases.
- ✅ Familiarity with CI/CD practices and automation tools.
- ✅ Excellent problem-solving and analytical skills.
- ✅ Strong track record of open-source contributions (minimum 50 stars on GitHub).
- ✅ Passion for FinTech and payment systems.
- ✅ Strong communication skills and ability to work collaboratively in a team.
Nice to Have
- Prior experience in financial services or payment systems.
- Exposure to microservices architecture and cloud platforms.
- Knowledge of containerization tools like Docker & Kubernetes.
Key Responsibilities would include:
1. Design, develop, and maintain enterprise-level Java applications.
2. Collaborate with cross-functional teams to gather and analyze requirements, and implement solutions.
3. Develop & customize the application using HTML5, CSS, and jQuery to create dynamic and responsive user interfaces.
4. Integrate with relational databases (RDBMS) to manage and retrieve data efficiently.
5. Write clean, maintainable, and efficient code following best practices and coding standards.
6. Participate in code reviews, debugging, and testing to ensure high-quality deliverables.
7. Troubleshoot and resolve issues in existing applications and systems.
Qualification requirements:
1. 4 years of hands-on experience in Java/J2EE development, preferably with enterprise-level projects.
2. Spring Framework, including SOA, AOP, and Spring Security.
3. Proficiency in web technologies including HTML5, CSS, jQuery, and JavaScript.
4. Experience with RESTful APIs and web services.
5. Knowledge of build tools like Maven or Gradle.
6. Strong knowledge of relational databases (e.g., MySQL, PostgreSQL, Oracle) and experience with SQL.
7. Experience with version control systems like Git.
8. Understanding of the software development lifecycle (SDLC).
9. Strong problem-solving skills and attention to detail.


About Us
Seeking a talented .NET Developer to join our team and work with one of our key clients on the development of a cloud-based SaaS product.
What You’ll Do
- Collaborate closely with the client’s Product Team to brainstorm ideas, suggest product flows, and influence technical direction
- Develop and maintain robust, scalable, and maintainable code following SOLID principles, TDD/BDD, and clean architecture standards
- Work with Azure, C#/.NET, and React to build full-stack features and cloud-based services
- Design and implement microservices using CQRS, DDD, and other modern architectural patterns
- Manage and interact with SQL Server and other relational/non-relational databases
- Conduct code reviews and evaluate pull requests from team members
- Mentor junior developers and contribute to a strong engineering culture
- Take part in sprint reviews and agile ceremonies
- Analyze legacy systems for refactoring, modernization, and platform evolution
What We’re Looking For
- Strong hands-on experience with .NET (C#) and Azure Cloud Services
- Working knowledge of React for frontend development
- Deep understanding of Domain-Driven Design (DDD), Onion Architecture, CQRS, and Microservices Architecture
- Expertise in SOLID, OOP, Clean Code, KISS, and DRY principles
- Familiarity with both relational (SQL Server) and non-relational databases
- Experience with TDD/BDD testing approaches and scalable, testable code design
- Strong communication and collaboration skills
- Passion for mentoring and uplifting fellow engineers
Nice to Have
- Experience with event-driven architecture
- Exposure to containerization (Docker, Kubernetes)
- Familiarity with DevOps pipelines and CI/CD on Azure

At least 5 years of experience in testing and developing automation tests.
A minimum of 3 years of experience writing tests in Python, with a preference for experience in designing automation frameworks.
Experience in developing automation for big data testing, including data ingestion, data processing, and data migration, is highly desirable.
Familiarity with Playwright or other browser application testing frameworks is a significant advantage.
Proficiency in object-oriented programming and principles is required.
Extensive knowledge of AWS services is essential.
Strong expertise in REST API testing and SQL is required.
A solid understanding of testing and development life cycle methodologies is necessary.
Knowledge of the financial industry and trading systems is a plus
Job Title: Sr. QA Engineer
Location: Pune, Baner
Mode - Hybrid
Major Responsibilities:
- Understand product requirements and design test plans/ test cases.
- Collaborate with developers for discussing story design/ test cases/code walkthrough etc.
- Design automation strategy for regression test cases.
- Execute tests and collaborate with developers in case of issues.
- Review unit test coverage/ enhance existing unit test coverage
- Automate integration/end-to-end tests using JUnit/Mockito/Selenium/Cypress (a minimal sketch follows this list)
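As an illustration of the end-to-end automation described above, here is a minimal test sketch. It uses Python with pytest and Selenium purely for illustration; the team's stated stack is JUnit/Mockito/Selenium/Cypress, and the URL and element IDs are hypothetical.

```python
# Minimal end-to-end test sketch with pytest + Selenium WebDriver.
# The application URL, element IDs, and expected title are placeholders.
import pytest
from selenium import webdriver
from selenium.webdriver.common.by import By

@pytest.fixture
def driver():
    d = webdriver.Chrome()          # assumes a local ChromeDriver is available
    yield d
    d.quit()

def test_login_shows_dashboard(driver):
    driver.get("https://example.com/login")           # hypothetical app under test
    driver.find_element(By.ID, "username").send_keys("qa_user")
    driver.find_element(By.ID, "password").send_keys("secret")
    driver.find_element(By.ID, "submit").click()
    assert "Dashboard" in driver.title                # regression check on landing page
```

In practice a check like this would sit in the regression suite and run as part of CI.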
Requirements:
- Experience of web application testing/ test automation
- Good analytical skills
- Exposure to test design techniques
- Exposure to Agile Development methodology, Scrums
- Should be able to read and understand code.
- Review and understand unit test cases/ suggest additional unit-level coverage points.
- Exposure to multi-tier web application deployment/architecture (Spring Boot)
- Good exposure to SQL query language
- Exposure to Configuration management tool for code investigation - GitHub
- Exposure to Web Service / API testing
- Cucumber – use case-driven test automation
- System understanding, writing test cases from scratch, requirement analysis, thinking from a user perspective, and test design
In your role as Software Engineer/Lead, you will directly work with other developers, Product Owners, and Scrum Masters to evaluate and develop innovative solutions. The purpose of the role is to design, develop, test, and operate a complex set of applications or platforms in the IoT Cloud area.
The role involves the utilization of advanced tools and analytical methods for gathering facts to develop solution scenarios. The job holder needs to be able to execute quality code, review code, and collaborate with other developers.
We have an excellent mix of people, which we believe makes for a more vibrant, more innovative, and more productive team.
- A bachelor’s degree, or master’s degree in information technology, computer science, or other relevant education
- At least 5 years of experience as Software Engineer, in an enterprise context
- Experience in design, development and deployment of large-scale cloud-based applications and services
- Good knowledge of cloud (AWS) serverless application development, event-driven architecture, and SQL/NoSQL databases
- Experience with IoT products, backend services and design principles
- Good knowledge of at least one backend technology like Node.js (JavaScript, TypeScript) or the JVM (Java, Scala, Kotlin)
- Passionate about code quality, security and testing
- Microservice development experience with Java (Spring) is a plus
- Good command of English, both oral and written



We are looking for a Senior Data Engineer with strong expertise in GCP, Databricks, and Airflow to design and implement a GCP Cloud Native Data Processing Framework. The ideal candidate will work on building scalable data pipelines and help migrate existing workloads to a modern framework.
- Shift: 2 PM to 11 PM
- Work Mode: Hybrid (3 days a week) across Xebia locations
- Notice Period: Immediate joiners or those with a notice period of up to 30 days
Key Responsibilities:
- Design and implement a GCP Native Data Processing Framework leveraging Spark and GCP Cloud Services.
- Develop and maintain data pipelines using Databricks and Airflow for transforming Raw → Silver → Gold data layers (a minimal sketch follows this list).
- Ensure data integrity, consistency, and availability across all systems.
- Collaborate with data engineers, analysts, and stakeholders to optimize performance.
- Document standards and best practices for data engineering workflows.
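As a rough illustration of the orchestration described above, here is a minimal Airflow sketch chaining two Databricks jobs for the Raw → Silver → Gold flow. The operator, connection name, and job IDs are assumptions for illustration only; a real pipeline would use the team's Databricks workspace and GCP setup.

```python
# Minimal Airflow DAG sketch: Raw -> Silver -> Gold via two Databricks jobs.
from datetime import datetime
from airflow import DAG
from airflow.providers.databricks.operators.databricks import DatabricksRunNowOperator

with DAG(
    dag_id="raw_silver_gold_daily",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    raw_to_silver = DatabricksRunNowOperator(
        task_id="raw_to_silver",
        databricks_conn_id="databricks_default",
        job_id=101,      # hypothetical Databricks job that cleans raw data
    )
    silver_to_gold = DatabricksRunNowOperator(
        task_id="silver_to_gold",
        databricks_conn_id="databricks_default",
        job_id=102,      # hypothetical job that builds curated gold tables
    )
    raw_to_silver >> silver_to_gold
```

Keeping each layer as its own Databricks job keeps retries and monitoring per layer rather than per pipeline.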
Required Experience:
- 7-8 years of experience in data engineering, architecture, and pipeline development.
- Strong knowledge of GCP, Databricks, PySpark, and BigQuery.
- Experience with Orchestration tools like Airflow, Dagster, or GCP equivalents.
- Understanding of Data Lake table formats (Delta, Iceberg, etc.).
- Proficiency in Python for scripting and automation.
- Strong problem-solving skills and collaborative mindset.
⚠️ Please apply only if you have not applied recently or are not currently in the interview process for any open roles at Xebia.
Looking forward to your response!
Best regards,
Vijay S
Assistant Manager - TAG
Here is the Job Description -
Location -- Viman Nagar, Pune
Mode - 5 Days Working
Required Tech Skills:
● Strong at PySpark, Python
● Good understanding of data structures
● Good at SQL query writing and optimization
● Strong fundamentals of OOP programming
● Good understanding of AWS Cloud and Big Data.
● Data Lake, AWS Glue, Athena, S3, Kinesis, SQL/NoSQL DB


JioTesseract, a digital arm of Reliance Industries, is India's leading and largest AR/VR organization with the mission to democratize mixed reality for India and the world. We make products at the intersection of hardware, software, content, and services, with a focus on making India the leader in spatial computing. We specialize in creating solutions in AR, VR, and AI, with notable products such as JioGlass, JioDive, 360 Streaming, Metaverse, and AR/VR headsets for the consumer and enterprise space.
Mon–Fri role, in office, with excellent perks and benefits!
Position Overview
We are seeking a Software Architect to lead the design and development of high-performance robotics and AI software stacks utilizing NVIDIA technologies. This role will focus on defining scalable, modular, and efficient architectures for robot perception, planning, simulation, and embedded AI applications. You will collaborate with cross-functional teams to build next-generation autonomous systems.
Key Responsibilities:
1. System Architecture & Design
● Define scalable software architectures for robotics perception, navigation, and AI-driven decision-making.
● Design modular and reusable frameworks that leverage NVIDIA’s Jetson, Isaac ROS, Omniverse, and CUDA ecosystems.
● Establish best practices for real-time computing, GPU acceleration, and edge AI inference.
2. Perception & AI Integration
● Architect sensor fusion pipelines using LIDAR, cameras, IMUs, and radar with DeepStream, TensorRT, and ROS2.
● Optimize computer vision, SLAM, and deep learning models for edge deployment on Jetson Orin and Xavier.
● Ensure efficient GPU-accelerated AI inference for real-time robotics applications.
3. Embedded & Real-Time Systems
● Design high-performance embedded software stacks for real-time robotic control and autonomy.
● Utilize NVIDIA CUDA, cuDNN, and TensorRT to accelerate AI model execution on Jetson platforms (a minimal export sketch follows this responsibilities section).
● Develop robust middleware frameworks to support real-time robotics applications in ROS2 and Isaac SDK.
4. Robotics Simulation & Digital Twins
● Define architectures for robotic simulation environments using NVIDIA Isaac Sim & Omniverse.
● Leverage synthetic data generation (Omniverse Replicator) for training AI models.
● Optimize sim-to-real transfer learning for AI-driven robotic behaviors.
5. Navigation & Motion Planning
● Architect GPU-accelerated motion planning and SLAM pipelines for autonomous robots.
● Optimize path planning, localization, and multi-agent coordination using Isaac ROS Navigation.
● Implement reinforcement learning-based policies using Isaac Gym.
6. Performance Optimization & Scalability
● Ensure low-latency AI inference and real-time execution of robotics applications.
● Optimize CUDA kernels and parallel processing pipelines for NVIDIA hardware.
● Develop benchmarking and profiling tools to measure software performance on edge AI devices.
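As a small illustration of the edge-deployment work referenced above, here is a hedged sketch of exporting a PyTorch model to ONNX, a common first step before TensorRT optimisation on Jetson-class devices. The model and input shape are placeholders, not part of the role description.

```python
# Minimal sketch: export a PyTorch vision model to ONNX ahead of TensorRT conversion.
# The backbone and input shape are stand-ins; real perception models would differ.
import torch
import torchvision

model = torchvision.models.resnet18(weights=None)   # stand-in perception backbone
model.eval()

dummy_input = torch.randn(1, 3, 224, 224)            # NCHW batch of one image
torch.onnx.export(
    model,
    dummy_input,
    "resnet18.onnx",
    input_names=["images"],
    output_names=["logits"],
    opset_version=17,
)
# The resulting .onnx file can then be converted with trtexec or the TensorRT
# tooling and benchmarked on the target Jetson device.
```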
Required Qualifications:
● Master’s or Ph.D. in Computer Science, Robotics, AI, or Embedded Systems.
● Extensive experience (7+ years) in software development, with at least 3-5 years focused on architecture and system design, especially for robotics or embedded systems.
● Expertise in CUDA, TensorRT, DeepStream, PyTorch, TensorFlow, and ROS2.
● Experience in NVIDIA Jetson platforms, Isaac SDK, and GPU-accelerated AI.
● Proficiency in programming languages such as C++, Python, or similar, with deep understanding of low-level and high-level design principles.
● Strong background in robotic perception, planning, and real-time control.
● Experience with cloud-edge AI deployment and scalable architectures.
Preferred Qualifications
● Hands-on experience with NVIDIA DRIVE, NVIDIA Omniverse, and Isaac Gym
● Knowledge of robot kinematics, control systems, and reinforcement learning
● Expertise in distributed computing, containerization (Docker), and cloud robotics
● Familiarity with automotive, industrial automation, or warehouse robotics
● Experience designing architectures for autonomous systems or multi-robot systems.
● Familiarity with cloud-based solutions, edge computing, or distributed computing for robotics
● Experience with microservices or service-oriented architecture (SOA)
● Knowledge of machine learning and AI integration within robotic systems
● Knowledge of testing on edge devices with HIL and simulations (Isaac Sim, Gazebo, V-REP etc.)
What You’ll Do:
* Establish formal data practice for the organisation.
* Build & operate scalable and robust data architectures.
* Create pipelines for the self-service introduction and usage of new data.
* Implement DataOps practices
* Design, Develop, and operate Data Pipelines which support Data scientists and machine learning Engineers.
* Build simple, highly reliable Data storage, ingestion, and transformation solutions which are easy to deploy and manage.
* Collaborate with various business stakeholders, software engineers, machine learning engineers, and analysts.
Who You Are:
* Experience in designing, developing and operating configurable Data pipelines serving high volume and velocity data.
* Experience working with public clouds like GCP/AWS.
* Good understanding of software engineering, DataOps, data architecture, Agile and DevOps methodologies.
* Experience building Data architectures that optimize performance and cost, whether the components are prepackaged or homegrown.
* Proficient with SQL, Java, Spring boot, Python or JVM-based language, Bash.
* Experience with any of Apache open source projects such as Spark, Druid, Beam, Airflow etc. and big data databases like BigQuery, Clickhouse, etc
* Good communication skills with the ability to collaborate with both technical and non-technical people.
* Ability to Think Big, take bets and innovate, Dive Deep, Bias for Action, Hire and Develop the Best, Learn and be Curious

About the Company:
Gruve is an innovative Software Services startup dedicated to empowering Enterprise Customers in managing their Data Life Cycle. We specialize in Cyber Security, Customer Experience, Infrastructure, and advanced technologies such as Machine Learning and Artificial Intelligence. Our mission is to assist our customers in their business strategies utilizing their data to make more intelligent decisions. As a well-funded early-stage startup, Gruve offers a dynamic environment with strong customer and partner networks.
Why Gruve:
At Gruve, we foster a culture of innovation, collaboration, and continuous learning. We are committed to building a diverse and inclusive workplace where everyone can thrive and contribute their best work. If you’re passionate about technology and eager to make an impact, we’d love to hear from you.
Gruve is an equal opportunity employer. We welcome applicants from all backgrounds and thank all who apply; however, only those selected for an interview will be contacted.
Position Summary:
As an Architect, you will be responsible for designing, implementing, and managing SailPoint IdentityIQ (IIQ) solutions to ensure effective identity governance and access management across our enterprise. You will work closely with stakeholders to understand their requirements, develop solutions that align with business objectives, and oversee the deployment and integration of SailPoint technologies.
Key Responsibilities:
Architect and Design Solutions:
- Design and architect SailPoint IIQ solutions that meet business needs and align with IT strategy.
- Develop detailed technical designs, including integration points, workflows, and data models.
Implementation and Integration:
- Lead the implementation and configuration of SailPoint IIQ, including connectors, identity governance, and compliance features.
- Integrate SailPoint with various systems, applications, and directories (e.g., Active Directory, LDAP, databases).
Project Management:
- Manage project timelines, resources, and deliverables to ensure successful deployment of SailPoint IIQ solutions.
- Coordinate with cross-functional teams to address project requirements, risks, and issues.
Customization and Development:
- Customize SailPoint IIQ functionalities, including developing custom connectors, workflows, and rules.
- Develop and maintain documentation related to architecture, configurations, and customizations.
Support and Troubleshooting:
- Provide ongoing support for SailPoint IIQ implementations, including troubleshooting and resolving technical issues.
- Conduct regular reviews and performance tuning to optimize the SailPoint environment.
Compliance and Best Practices:
- Ensure SailPoint IIQ implementations adhere to industry best practices, security policies, and regulatory requirements.
- Stay current with SailPoint updates and advancements, and recommend improvements and enhancements.
Collaboration and Training:
- Collaborate with business and IT stakeholders to understand requirements and translate them into technical solutions.
- Provide training and support to end-users and internal teams on SailPoint IIQ functionalities and best practices.
Education and Experience:
- Bachelor’s degree in computer science, Information Technology, or a related field.
- Minimum of 5 years of experience with identity and access management (IAM) solutions, with a strong focus on SailPoint IIQ.
- Proven experience in designing and implementing SailPoint IIQ solutions in complex environments.
Technical Skills:
- Expertise in SailPoint IIQ architecture, configuration, and customization.
- Strong knowledge of identity governance, compliance, and role-based access control (RBAC).
- Experience with integration of SailPoint with various systems and applications.
- Proficiency in Java, XML, SQL, and other relevant technologies.
Certification Preferred:
1. SailPoint IIQ Certification (e.g., SailPoint Certified Implementation Engineer).
2. Other relevant IAM or security certifications (e.g., CISSP, CISM).


About Data Axle:
Data Axle Inc. has been an industry leader in data, marketing solutions, sales, and research for over 50 years in the USA. Data Axle now has an established strategic global centre of excellence in Pune. This centre delivers mission-critical data services to its global customers, powered by its proprietary cloud-based technology platform and by leveraging proprietary business & consumer databases.
Data Axle Pune is pleased to have achieved certification as a Great Place to Work!
Roles & Responsibilities:
We are looking for a Senior Data Scientist to join the Data Science Client Services team to continue our success of identifying high quality target audiences that generate profitable marketing return for our clients. We are looking for experienced data science, machine learning and MLOps practitioners to design, build and deploy impactful predictive marketing solutions that serve a wide range of verticals and clients. The right candidate will enjoy contributing to and learning from a highly talented team and working on a variety of projects.
We are looking for a Senior Data Scientist who will be responsible for:
- Ownership of design, implementation, and deployment of machine learning algorithms in a modern Python-based cloud architecture
- Design or enhance ML workflows for data ingestion, model design, model inference, and scoring (a minimal sketch follows this list)
- Oversight on team project execution and delivery
- Establish peer review guidelines for high quality coding to help develop junior team members’ skill set growth, cross-training, and team efficiencies
- Visualize and publish model performance results and insights to internal and external audiences
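As a minimal illustration of the ingestion-to-scoring workflow mentioned above, here is a hedged scikit-learn sketch. The feature table, target column, and model choice are hypothetical and stand in for the predictive-marketing models the role describes.

```python
# Minimal ingestion -> train -> score workflow with scikit-learn.
# File path, feature names, and the target column are placeholders.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

df = pd.read_parquet("audience_features.parquet")        # hypothetical feature table
X, y = df.drop(columns=["responded"]), df["responded"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(X_train, y_train)

scores = model.predict_proba(X_test)[:, 1]               # propensity scores for targeting
print("AUC:", round(roc_auc_score(y_test, scores), 3))
```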
Qualifications:
- Masters in a relevant quantitative, applied field (Statistics, Econometrics, Computer Science, Mathematics, Engineering)
- Minimum of 5 years of work experience in the end-to-end lifecycle of ML model development and deployment into production within a cloud infrastructure (Databricks is highly preferred)
- Proven ability to manage the output of a small team in a fast-paced environment and to lead by example in the fulfilment of client requests
- Exhibit deep knowledge of core mathematical principles relating to data science and machine learning (ML Theory + Best Practices, Feature Engineering and Selection, Supervised and Unsupervised ML, A/B Testing, etc.)
- Proficiency in Python and SQL required; PySpark/Spark experience a plus
- Ability to conduct productive peer reviews and maintain proper code structure in GitHub
- Proven experience developing, testing, and deploying various ML algorithms (neural networks, XGBoost, Bayes, and the like)
- Working knowledge of modern CI/CD methods.
This position description is intended to describe the duties most frequently performed by an individual in this position. It is not intended to be a complete list of assigned duties but to describe a position level.
We are seeking a skilled Cloud Data Engineer with experience in cloud data platforms like AWS or Azure, and especially Snowflake and dbt, to join our dynamic team. As a consultant, you will be responsible for developing new data platforms and creating the data processes. You will collaborate with cross-functional teams to design, develop, and deploy high-quality data solutions.
Responsibilities:
Customer consulting: You develop data-driven products in the Snowflake Cloud and connect data & analytics with specialist departments. You develop ELT processes using dbt (data build tool)
Specifying requirements: You develop concrete requirements for future-proof cloud data architectures.
Develop data routes: You design scalable and powerful data management processes.
Analyze data: You derive sound findings from data sets and present them in an understandable way.
Requirements:
Requirements management and project experience: You successfully implement cloud-based data & analytics projects.
Data architectures: You are proficient in DWH/data lake concepts and modeling with Data Vault 2.0.
Cloud expertise: You have extensive knowledge of Snowflake, dbt and other cloud technologies (e.g. MS Azure, AWS, GCP).
SQL know-how: You have a sound and solid knowledge of SQL.
Data management: You are familiar with topics such as master data management and data quality.
Bachelor's degree in computer science, or a related field.
Strong communication and collaboration abilities to work effectively in a team environment.
Skills & Requirements
Cloud Data Engineering, AWS, Azure, Snowflake, dbt, ELT processes, Data-driven consulting, Cloud data architectures, Scalable data management, Data analysis, Requirements management, Data warehousing, Data lake, Data Vault 2.0, SQL, Master data management, Data quality, GCP, Strong communication, Collaboration.


Level of skills and experience:
5 years of hands-on experience using Python, Spark, and SQL.
Experienced in AWS Cloud usage and management.
Experience with Databricks (Lakehouse, ML, Unity Catalog, MLflow).
Experience using various ML models and frameworks such as XGBoost, LightGBM, and Torch.
Experience with orchestrators such as Airflow and Kubeflow.
Familiarity with containerization and orchestration technologies (e.g., Docker, Kubernetes).
Fundamental understanding of Parquet, Delta Lake and other data file formats.
Proficiency on an IaC tool such as Terraform, CDK or CloudFormation.
Strong written and verbal English communication skills; proficient in communicating with non-technical stakeholders.


We’re looking for a Tech Lead with expertise in ReactJS (Next.js), backend technologies, and database management to join our dynamic team.
Key Responsibilities:
- Lead and mentor a team of 4-6 developers.
- Architect and deliver innovative, scalable solutions.
- Ensure seamless performance while handling large volumes of data without system slowdowns.
- Collaborate with cross-functional teams to meet business goals.
Required Expertise:
- Frontend: ReactJS (Next.js is a must).
- Backend: Experience in Node.js, Python, or Java.
- Databases: SQL (mandatory), MongoDB (nice to have).
- Caching & Messaging: Redis, Kafka, or Cassandra experience is a plus.
- Proven experience in system design and architecture.
- Cloud certification is a bonus.
We are seeking a highly skilled and experienced Power BI Lead / Architect to join our growing team. The ideal candidate will have a strong understanding of data warehousing, data modeling, and business intelligence best practices. This role will be responsible for leading the design, development, and implementation of complex Power BI solutions that provide actionable insights to key stakeholders across the organization.
Location - Pune (Hybrid 3 days)
Responsibilities:
Lead the design, development, and implementation of complex Power BI dashboards, reports, and visualizations.
Develop and maintain data models (star schema, snowflake schema) for optimal data analysis and reporting.
Perform data analysis, data cleansing, and data transformation using SQL and other ETL tools.
Collaborate with business stakeholders to understand their data needs and translate them into effective and insightful reports.
Develop and maintain data pipelines and ETL processes to ensure data accuracy and consistency.
Troubleshoot and resolve technical issues related to Power BI dashboards and reports.
Provide technical guidance and mentorship to junior team members.
Stay abreast of the latest trends and technologies in the Power BI ecosystem.
Ensure data security, governance, and compliance with industry best practices.
Contribute to the development and improvement of the organization's data and analytics strategy.
May lead and mentor a team of junior Power BI developers.
Qualifications:
8-12 years of experience in Business Intelligence and Data Analytics.
Proven expertise in Power BI development, including DAX, advanced data modeling techniques.
Strong SQL skills, including writing complex queries, stored procedures, and views.
Experience with ETL/ELT processes and tools.
Experience with data warehousing concepts and methodologies.
Excellent analytical, problem-solving, and communication skills.
Strong teamwork and collaboration skills.
Ability to work independently and proactively.
Bachelor's degree in Computer Science, Information Systems, or a related field preferred.
Experience: 4+ years.
Location: Vadodara & Pune
Skill Set: Snowflake, Power BI, ETL, SQL, Data Pipelines
What you'll be doing:
- Develop, implement, and manage scalable Snowflake data warehouse solutions using advanced features such as materialized views, task automation, and clustering.
- Design and build real-time data pipelines from Kafka and other sources into Snowflake using Kafka Connect, Snowpipe, or custom solutions for streaming data ingestion (a minimal sketch follows this list).
- Create and optimize ETL/ELT workflows using tools like DBT, Airflow, or cloud-native solutions to ensure efficient data processing and transformation.
- Tune query performance, warehouse sizing, and pipeline efficiency by utilizing Snowflake's Query Profiling, Resource Monitors, and other diagnostic tools.
- Work closely with architects, data analysts, and data scientists to translate complex business requirements into scalable technical solutions.
- Enforce data governance and security standards, including data masking, encryption, and RBAC, to meet organizational compliance requirements.
- Continuously monitor data pipelines, address performance bottlenecks, and troubleshoot issues using monitoring frameworks such as Prometheus, Grafana, or Snowflake-native tools.
- Provide technical leadership, guidance, and code reviews for junior engineers, ensuring best practices in Snowflake and Kafka development are followed.
- Research emerging tools, frameworks, and methodologies in data engineering and integrate relevant technologies into the data stack.
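To illustrate the streaming-ingestion responsibility above, here is a minimal Python sketch that consumes Kafka messages and lands them in Snowflake in micro-batches. In practice the Kafka Connect / Snowpipe route named in the bullet would usually be preferred; the topic, table, and credentials below are placeholders.

```python
# Minimal sketch: Kafka -> Snowflake micro-batch ingestion using kafka-python
# and snowflake-connector-python. Topic, table, and credentials are placeholders;
# raw_orders.payload is assumed to be a string column holding the raw JSON.
import json
import snowflake.connector
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "orders",                                   # hypothetical topic
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

conn = snowflake.connector.connect(
    account="my_account", user="etl_user", password="***",
    warehouse="LOAD_WH", database="RAW", schema="STREAMING",
)
cur = conn.cursor()

batch = []
for message in consumer:
    batch.append((message.value["order_id"], json.dumps(message.value)))
    if len(batch) >= 500:                       # micro-batch to limit round trips
        cur.executemany(
            "INSERT INTO raw_orders (order_id, payload) VALUES (%s, %s)",
            batch,
        )
        conn.commit()
        batch.clear()
```

Batching inserts keeps Snowflake round trips and warehouse credits under control compared with row-by-row writes.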
What you need:
Basic Skills:
- 3+ years of hands-on experience with Snowflake data platform, including data modeling, performance tuning, and optimization.
- Strong experience with Apache Kafka for stream processing and real-time data integration.
- Proficiency in SQL and ETL/ELT processes.
- Solid understanding of cloud platforms such as AWS, Azure, or Google Cloud.
- Experience with scripting languages like Python, Shell, or similar for automation and data integration tasks.
- Familiarity with tools like dbt, Airflow, or similar orchestration platforms.
- Knowledge of data governance, security, and compliance best practices.
- Strong analytical and problem-solving skills with the ability to troubleshoot complex data issues.
- Ability to work in a collaborative team environment and communicate effectively with cross-functional teams
Responsibilities:
- Design, develop, and maintain Snowflake data warehouse solutions, leveraging advanced Snowflake features like clustering, partitioning, materialized views, and time travel to optimize performance, scalability, and data reliability.
- Architect and optimize ETL/ELT pipelines using tools such as Apache Airflow, DBT, or custom scripts, to ingest, transform, and load data into Snowflake from sources like Apache Kafka and other streaming/batch platforms.
- Work in collaboration with data architects, analysts, and data scientists to gather and translate complex business requirements into robust, scalable technical designs and implementations.
- Design and implement Apache Kafka-based real-time messaging systems to efficiently stream structured and semi-structured data into Snowflake, using Kafka Connect, KSQL, and Snowpipe for real-time ingestion.
- Monitor and resolve performance bottlenecks in queries, pipelines, and warehouse configurations using tools like Query Profile, Resource Monitors, and Task Performance Views.
- Implement automated data validation frameworks to ensure high-quality, reliable data throughout the ingestion and transformation lifecycle.
- Pipeline Monitoring and Optimization: Deploy and maintain pipeline monitoring solutions using Prometheus, Grafana, or cloud-native tools, ensuring efficient data flow, scalability, and cost-effective operations.
- Implement and enforce data governance policies, including role-based access control (RBAC), data masking, and auditing to meet compliance standards and safeguard sensitive information.
- Provide hands-on technical mentorship to junior data engineers, ensuring adherence to coding standards, design principles, and best practices in Snowflake, Kafka, and cloud data engineering.
- Stay current with advancements in Snowflake, Kafka, cloud services (AWS, Azure, GCP), and data engineering trends, and proactively apply new tools and methodologies to enhance the data platform.
About the company
KPMG International Limited, commonly known as KPMG, is one of the largest professional services networks in the world, recognized as one of the "Big Four" accounting firms alongside Deloitte, PricewaterhouseCoopers (PwC), and Ernst & Young (EY). KPMG provides a comprehensive range of professional services primarily focused on three core areas: Audit and Assurance, Tax Services, and Advisory Services. Their Audit and Assurance services include financial statement audits, regulatory audits, and other assurance services. The Tax Services cover various aspects such as corporate tax, indirect tax, international tax, and transfer pricing. Meanwhile, their Advisory Services encompass management consulting, risk consulting, deal advisory, and other related services.
Apply through this link- https://forms.gle/qmX9T7VrjySeWYa37
Job Description
Position: Data Engineer
Experience: 5+ years of relevant experience
Location: WFO (3 days working) Pune – Kharadi, NCR – Gurgaon, Bangalore
Employment Type: Contract for 3 months; can be extended based on performance and future requirements
Skills Required:
• Proficiency in SQL, AWS, and data integration tools like Airflow or equivalent. Knowledge of tools like JIRA, GitHub, etc.
• A Data Engineer who will be able to work on data management activities and orchestration processes.
Job Description :
Job Title : Data Engineer
Location : Pune (Hybrid Work Model)
Experience Required : 4 to 8 Years
Role Overview :
We are seeking talented and driven Data Engineers to join our team in Pune. The ideal candidate will have a strong background in data engineering with expertise in Python, PySpark, and SQL. You will be responsible for designing, building, and maintaining scalable data pipelines and systems that empower our business intelligence and analytics initiatives.
Key Responsibilities:
- Develop, optimize, and maintain ETL pipelines and data workflows.
- Design and implement scalable data solutions using Python, PySpark, and SQL (a minimal sketch follows this list).
- Collaborate with cross-functional teams to gather and analyze data requirements.
- Ensure data quality, integrity, and security throughout the data lifecycle.
- Monitor and troubleshoot data pipelines to ensure reliability and performance.
- Work on hybrid data environments involving on-premise and cloud-based systems.
- Assist in the deployment and maintenance of big data solutions.
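As a small illustration of the PySpark work described above, here is a hedged ETL sketch: read raw files, apply simple quality rules, and write partitioned Parquet. The paths, columns, and rules are placeholders.

```python
# Minimal PySpark ETL sketch: raw CSV -> basic quality filters -> partitioned Parquet.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

raw = spark.read.option("header", True).csv("s3a://raw-bucket/orders/")  # hypothetical path

clean = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("amount").cast("double") > 0)          # basic data-quality rule
       .withColumn("order_date", F.to_date("order_ts"))
)

(clean.write
      .mode("overwrite")
      .partitionBy("order_date")
      .parquet("s3a://curated-bucket/orders/"))
```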
Required Skills and Qualifications:
- Bachelor’s degree in Computer Science, Information Technology, or related field.
- 4 to 8 Years of experience in Data Engineering or related roles.
- Proficiency in Python and PySpark for data processing and analysis.
- Strong SQL skills with experience in writing complex queries and optimizing performance.
- Familiarity with data pipeline tools and frameworks.
- Knowledge of cloud platforms such as AWS, Azure, or GCP is a plus.
- Excellent problem-solving and analytical skills.
- Strong communication and teamwork abilities.
Preferred Qualifications:
- Experience with big data technologies like Hadoop, Hive, or Spark.
- Familiarity with data visualization tools and techniques.
- Knowledge of CI/CD pipelines and DevOps practices in a data engineering context.
Work Model:
- This position follows a hybrid work model, with candidates expected to work from the Pune office as per business needs.
Why Join Us?
- Opportunity to work with cutting-edge technologies.
- Collaborative and innovative work environment.
- Competitive compensation and benefits.
- Clear career progression and growth opportunities.

- 8-12 years of professional experience in .NET Framework 2.0/3.5/4.0/4.5 (C#, ASP.NET) and ADO.NET. Strong knowledge of software development, debugging, and deployment tools. Knowledge and experience of SQL Server (any version).
- Knowledge of versioning tools. Strong experience working with HTML, CSS, JavaScript, and jQuery.
- Good problem-solving and analytical skills. Ability to work independently and as part of a team, and to handle individual as well as team projects.
- Understanding of MVC, WCF, WPF, and any mobile technology is beneficial. Ability to learn new technology in a short period.
- BizTalk Development:
- Design, develop, and deploy BizTalk solutions for integrating business processes.
- Build custom BizTalk maps, orchestrations, pipelines, and adapters to meet integration requirements.
- Troubleshoot and resolve issues with BizTalk applications, ensuring minimal downtime.
- Implement best practices for BizTalk development and integration.
- Create and maintain documentation related to BizTalk applications and processes.
- SQL Development & Management:
- Develop, optimize, and maintain SQL Server databases, including stored procedures, queries, and triggers.
- Write efficient SQL queries to extract, transform, and load data.
- Perform database tuning and optimization for performance improvements.
- Ensure database security, integrity, and backup.
- Conduct routine maintenance tasks such as patching and updates for SQL databases.
- Integration & Collaboration:
- Collaborate with business analysts and other developers to understand business requirements and integrate them into solutions.
- Participate in design and code reviews, ensuring quality and standards are adhered to.
- Assist in the deployment of BizTalk solutions and SQL database updates to production environments.
- Support Existing Systems:
- Provide ongoing support and maintenance for existing BizTalk applications, SQL databases, and integration processes.
- Troubleshoot, monitor, and optimize existing BizTalk integration workflows, databases, and related services.
- Ensure smooth operations by resolving issues and improving the performance of legacy systems.
- Monitoring & Support:
- Monitor the performance of BizTalk applications and SQL databases to ensure optimal performance.
- Provide ongoing support for BizTalk integrations and SQL database issues.
- Troubleshoot and resolve technical issues related to integrations and SQL queries.
- Provide 2nd/3rd line support for any issues that arise in the production environment.
Required Skills & Qualifications:
- Technical Skills:
- Strong experience with BizTalk Server (preferably BizTalk 2016 or later).
- Expertise in creating and managing BizTalk orchestrations, maps, and pipelines.
- Proficient in SQL Server (SQL Server 2012/2014/2016/2017/2019).
- Experience with T-SQL, stored procedures, triggers, views, and indexes.
- Familiarity with SQL Server Management Studio (SSMS) and SQL Server Integration Services (SSIS).
- Knowledge of web services (SOAP, REST) and messaging formats (XML, JSON).
- Experience with BizTalk adapters and message brokers.
- Analytical and Troubleshooting Skills:
- Strong problem-solving skills with the ability to debug and troubleshoot BizTalk and SQL issues.
- Ability to quickly understand new systems and integrations and provide efficient solutions.
- 4-8 years of experience in Functional testing with good foundation in technical expertise
- Experience in the Capital Markets domain is MUST
- Exposure to API testing tools like SoapUI and Postman
- Well versed with SQL
- Hands on experience in debugging issues using Unix commands
- Basic understanding of XML and JSON structures
- Knowledge of FitNesse is good to have
- Should be able to join at the earliest.
- Bachelor's degree or higher (or foreign equivalent) required, preferably in a relevant area.
- At least 5 years of experience in Duck Creek Data Insights as a Technical Architect/Senior Developer.
- Strong technical knowledge of SQL databases and MSBI.
- Should have strong hands-on knowledge of the Duck Creek Insights product, SQL Server/DB-level configuration, T-SQL, XSL/XSLT, MSBI, etc.
- Well versed with Duck Creek Extract Mapper Architecture
- Strong understanding of Data Modelling, Data Warehousing, Data Marts, Business Intelligence with ability to solve business problems
- Strong understanding of ETL and EDW toolsets on the Duck Creek Data Insights
- Strong knowledge on Duck Creek Insight product overall architecture flow, Data hub, Extract mapper etc
- Understanding of data related to business application areas: policy, billing, and claims business solutions
- Minimum 4 to 7 year working experience on Duck Creek Insights product
- Strong Technical knowledge on SQL databases, MSBI
- Preferable having experience in Insurance domain
- Preferable experience in Duck Creek Data Insights
- Experience specific to Duck Creek would be an added advantage
- Strong knowledge of database structure systems and data mining
- Excellent organisational and analytical abilities
- Outstanding problem solver

We're seeking an experienced Backend Software Engineer to join our team.
As a backend engineer, you will be responsible for designing, developing, and deploying scalable backends for the products we build at NonStop.
This includes APIs, databases, and server-side logic.
Responsibilities:
- Design, develop, and deploy backend systems, including APIs, databases, and server-side logic
- Write clean, efficient, and well-documented code that adheres to industry standards and best practices
- Participate in code reviews and contribute to the improvement of the codebase
- Debug and resolve issues in the existing codebase
- Develop and execute unit tests to ensure high code quality
- Work with DevOps engineers to ensure seamless deployment of software changes
- Monitor application performance, identify bottlenecks, and optimize systems for better scalability and efficiency
- Stay up-to-date with industry trends and emerging technologies; advocate for best practices and new ideas within the team
- Collaborate with cross-functional teams to identify and prioritize project requirements
Requirements:
- 2+ years of experience building scalable and reliable backend systems
- Strong proficiency in one of the following programming languages: Python, Node.js, Golang, or RoR
- Experience with one of the following frameworks: Django, Express, or gRPC
- Knowledge of database systems such as MySQL, PostgreSQL, MongoDB, Cassandra, or Redis
- Familiarity with containerization technologies such as Docker and Kubernetes
- Understanding of software development methodologies such as Agile and Scrum
- Ability to demonstrate flexibility with respect to picking up a new technology stack and ramping up on it fairly quickly
- Bachelor's/Master's degree in Computer Science or related field
- Strong problem-solving skills and ability to collaborate effectively with cross-functional teams
- Good written and verbal communication skills in English

Job Description
Phonologies is seeking a Senior Data Engineer to lead data engineering efforts for developing and deploying generative AI and large language models (LLMs). The ideal candidate will excel in building data pipelines, fine-tuning models, and optimizing infrastructure to support scalable AI systems for enterprise applications.
Role & Responsibilities
- Data Pipeline Management: Design and manage pipelines for AI model training, ensuring efficient data ingestion, storage, and transformation for real-time deployment.
- LLM Fine-Tuning & Model Lifecycle: Fine-tune LLMs on domain-specific data, and oversee the model lifecycle using tools like MLflow and Weights & Biases (a minimal tracking sketch follows this list).
- Scalable Infrastructure: Optimize infrastructure for large-scale data processing and real-time LLM performance, leveraging containerization and orchestration in hybrid/cloud environments.
- Data Management: Ensure data quality, security, and compliance, with workflows for handling sensitive and proprietary datasets.
- Continuous Improvement & MLOps: Apply MLOps/LLMOps practices for automation, versioning, and lifecycle management, while refining tools and processes for scalability and performance.
- Collaboration: Work with data scientists, engineers, and product teams to integrate AI solutions and communicate technical capabilities to business stakeholders.
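To illustrate the model-lifecycle tracking mentioned above, here is a minimal MLflow sketch around a placeholder fine-tuning step. The base model, dataset path, and hyperparameters are illustrative assumptions, not part of the role description.

```python
# Minimal sketch of tracking a fine-tuning run with MLflow. The fine_tune()
# body is a placeholder for the real training loop (e.g. a Hugging Face Trainer).
import mlflow

def fine_tune(base_model: str, dataset_path: str, lr: float, epochs: int) -> dict:
    # Placeholder: run the actual fine-tuning here and return evaluation results.
    return {"eval_loss": 0.42, "checkpoint_dir": "runs/checkpoint-last"}

params = {
    "base_model": "gpt2",                        # hypothetical base LLM
    "dataset": "data/support_tickets.jsonl",     # hypothetical domain corpus
    "learning_rate": 2e-5,
    "epochs": 3,
}

with mlflow.start_run(run_name="domain-finetune-v1"):
    mlflow.log_params(params)
    result = fine_tune(params["base_model"], params["dataset"],
                       lr=params["learning_rate"], epochs=params["epochs"])
    mlflow.log_metric("eval_loss", result["eval_loss"])
    # mlflow.log_artifacts(result["checkpoint_dir"])  # would attach the checkpoint
```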
Preferred Candidate Profile
- Experience: 5+ years in data engineering, focusing on AI/ML infrastructure, LLM fine-tuning, and deployment.
- Technical Skills: Advanced proficiency in Python, SQL, and distributed data tools.
- Model Management: Hands-on experience with MLFlow, Weights & Biases, and model lifecycle management.
- AI & NLP Expertise: Familiarity with LLMs (e.g., GPT, BERT) and NLP frameworks like Hugging Face Transformers.
- Cloud & Infrastructure: Strong skills with AWS, Azure, Google Cloud, Docker, and Kubernetes.
- MLOps/LLMOps: Expertise in versioning, CI/CD, and automating AI workflows.
- Collaboration & Communication: Proven ability to work with cross-functional teams and explain technical concepts to non-technical stakeholders.
- Education: Degree in Computer Science, Data Engineering, or related field.
Perks and Benefits
- Competitive Compensation: INR 20L to 30L per year.
- Innovative Work Environment for Personal Growth: Work with cutting-edge AI and data engineering tools in a collaborative setting, for continuous learning in data engineering and AI.

- Design and Build Advanced Applications for the Android Platform
- Collaborate with Cross-Functional Teams to Define, Design and Ship New Features
- Troubleshoot and Fix Bugs in New and Existing Applications
- Continuously Discover, Evaluate and Implement New Development Tools
- Work With Outside Data Sources and APIs
- Knowledge of Android SDK, Java programming, Kotlin, Jetpack Compose, Realm
- Version Control, Clean Architecture
Job Description:
- 3+ years of experience in Functional testing with good foundation in technical expertise
- Experience in Capital Markets/Investment Banking domain is MUST
- Exposure to API testing tools like SoapUI and Postman
- Well versed with SQL
- Hands on experience in debugging issues using Unix commands
- Basic understanding of XML and JSON structures
- Knowledge of FitNesse is good to have
Location:
Pune/Mumbai
About Wissen Technology:
· The Wissen Group was founded in the year 2000. Wissen Technology, a part of Wissen Group, was established in the year 2015.
· Wissen Technology is a specialized technology company that delivers high-end consulting for organizations in the Banking & Finance, Telecom, and Healthcare domains. We help clients build world class products.
· Our workforce consists of 550+ highly skilled professionals, with leadership and senior management executives who have graduated from Ivy League Universities like Wharton, MIT, IITs, IIMs, and NITs and with rich work experience in some of the biggest companies in the world.
· Wissen Technology has grown its revenues by 400% in these five years without any external funding or investments.
· Globally present with offices in the US, India, UK, Australia, Mexico, and Canada.
· We offer an array of services including Application Development, Artificial Intelligence & Machine Learning, Big Data & Analytics, Visualization & Business Intelligence, Robotic Process Automation, Cloud, Mobility, Agile & DevOps, Quality Assurance & Test Automation.
· Wissen Technology has been certified as a Great Place to Work®.
· Wissen Technology has been voted as the Top 20 AI/ML vendor by CIO Insider in 2020.
· Over the years, Wissen Group has successfully delivered $650 million worth of projects for more than 20 of the Fortune 500 companies.
· We have served clients across sectors like Banking, Telecom, Healthcare, Manufacturing, and Energy. They include Morgan Stanley, MSCI, StateStreet, Flipkart, Swiggy, Trafigura, GE to name a few.
Website : www.wissen.com

Who are we looking for?
We are looking for a Senior Data Scientist, who will design and develop data-driven solutions using state-of-the-art methods. You should be someone with strong and proven experience in working on data-driven solutions. If you feel you’re enthusiastic about transforming business requirements into insightful data-driven solutions, you are welcome to join our fast-growing team to unlock your best potential.
Job Summary
- Supporting company mission by understanding complex business problems through data-driven solutions.
- Designing and developing machine learning pipelines in Python and deploying them in AWS/GCP, ...
- Developing end-to-end ML production-ready solutions and visualizations.
- Analyse large sets of time-series industrial data from various sources, such as production systems, sensors, and databases, to draw actionable insights and present them via custom dashboards (a minimal sketch follows this list).
- Communicating complex technical concepts and findings to non-technical stakeholders of the projects
- Implementing the prototypes using suitable statistical tools and artificial intelligence algorithms.
- Preparing high-quality research papers and participating in conferences to present and report experimental results and research findings.
- Carrying out research collaborating with internal and external teams and facilitating review of ML systems for innovative ideas to prototype new models.
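As an illustration of the time-series analysis mentioned above, here is a minimal pandas sketch that resamples sensor readings and flags outliers with a simple 3-sigma rule. The file, column names, window, and threshold are placeholders.

```python
# Minimal time-series sketch on industrial sensor data: resample to a fixed
# interval, then flag points far from a rolling mean.
import pandas as pd

df = pd.read_csv("sensor_readings.csv", parse_dates=["timestamp"])  # hypothetical export
df = df.set_index("timestamp").sort_index()

hourly = df["temperature"].resample("1h").mean()

rolling_mean = hourly.rolling(window=24, min_periods=6).mean()
rolling_std = hourly.rolling(window=24, min_periods=6).std()
anomalies = hourly[(hourly - rolling_mean).abs() > 3 * rolling_std]   # simple 3-sigma rule

print(f"{len(anomalies)} anomalous hours out of {len(hourly)}")
```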
Qualification and experience
- B.Tech/Masters/Ph.D. in computer science, electrical engineering, mathematics, data science, and related fields.
- 5+ years of professional experience in the field of machine learning, and data science.
- Experience with large-scale Time-series data-based production code development is a plus.
Skills and competencies
- Familiarity with Docker, and ML Libraries like PyTorch, sklearn, pandas, SQL, and Git is a must.
- Ability to work on multiple projects. Must have strong design and implementation skills.
- Ability to conduct research based on complex business problems.
- Strong presentation skills and the ability to collaborate in a multi-disciplinary team.
- Must have programming experience in Python.
- Excellent English communication skills, both written and verbal.
Benefits and Perks
- Culture of innovation, creativity, learning, and even failure, we believe in bringing out the best in you.
- Progressive leave policy for effective work-life balance.
- Get mentored by highly qualified internal resource groups and opportunity to avail industry-driven mentorship program, as we believe in empowering people.
- Multicultural peer groups and supportive workplace policies.
- Work from beaches, hills, mountains, and many more with the yearly workcation program; we believe in mixing elements of vacation and work.
Hiring Process
- Call with Talent Acquisition Team: After application screening, a first-level screening with the talent acquisition team to understand the candidate's goals and alignment with the job requirements.
- First Round: Technical round 1 to gauge your domain knowledge and functional expertise.
- Second Round: In-depth technical round and discussion about the departmental goals, your role, and expectations.
- Final HR Round: Culture fit round and compensation discussions.
- Offer: Congratulations you made it!
If this position sparked your interest, apply now to initiate the screening process.

TVARIT GmbH develops and delivers solutions in the field of artificial intelligence (AI) for the Manufacturing, automotive, and process industries. With its software products, TVARIT makes it possible for its customers to make intelligent and well-founded decisions, e.g., in forward-looking Maintenance, increasing the OEE and predictive quality. We have renowned reference customers, competent technology, a good research team from renowned Universities, and the award of a renowned AI prize (e.g., EU Horizon 2020) which makes Tvarit one of the most innovative AI companies in Germany and Europe.
We are looking for a self-motivated person with a positive "can-do" attitude and excellent oral and written communication skills in English.
We are seeking a skilled and motivated Data Engineer from the manufacturing Industry with over two years of experience to join our team. As a data engineer, you will be responsible for designing, building, and maintaining the infrastructure required for the collection, storage, processing, and analysis of large and complex data sets. The ideal candidate will have a strong foundation in ETL pipelines and Python, with additional experience in Azure and Terraform being a plus. This role requires a proactive individual who can contribute to our data infrastructure and support our analytics and data science initiatives.
Skills Required
- Experience in the manufacturing industry (metal industry is a plus)
- 2+ years of experience as a Data Engineer
- Experience in data cleaning & structuring and data manipulation
- ETL Pipelines: Proven experience in designing, building, and maintaining ETL pipelines.
- Python: Strong proficiency in Python programming for data manipulation, transformation, and automation.
- Experience in SQL and data structures
- Knowledge in big data technologies such as Spark, Flink, Hadoop, Apache and NoSQL databases.
- Knowledge of cloud technologies (at least one) such as AWS, Azure, and Google Cloud Platform.
- Proficient in data management and data governance
- Strong analytical and problem-solving skills.
- Excellent communication and teamwork abilities.
Nice To Have
- Azure: Experience with Azure data services (e.g., Azure Data Factory, Azure Databricks, Azure SQL Database).
- Terraform: Knowledge of Terraform for infrastructure as code (IaC) to manage cloud.

TVARIT GmbH develops and delivers solutions in the field of artificial intelligence (AI) for the Manufacturing, automotive, and process industries. With its software products, TVARIT makes it possible for its customers to make intelligent and well-founded decisions, e.g., in forward-looking Maintenance, increasing the OEE and predictive quality. We have renowned reference customers, competent technology, a good research team from renowned Universities, and the award of a renowned AI prize (e.g., EU Horizon 2020) which makes TVARIT one of the most innovative AI companies in Germany and Europe.
We are looking for a self-motivated person with a positive "can-do" attitude and excellent oral and written communication skills in English.
We are seeking a skilled and motivated senior Data Engineer from the manufacturing Industry with over four years of experience to join our team. The Senior Data Engineer will oversee the department’s data infrastructure, including developing a data model, integrating large amounts of data from different systems, building & enhancing a data lake-house & subsequent analytics environment, and writing scripts to facilitate data analysis. The ideal candidate will have a strong foundation in ETL pipelines and Python, with additional experience in Azure and Terraform being a plus. This role requires a proactive individual who can contribute to our data infrastructure and support our analytics and data science initiatives.
Skills Required:
- Experience in the manufacturing industry (metal industry is a plus)
- 4+ years of experience as a Data Engineer
- Experience in data cleaning & structuring and data manipulation
- Architect and optimize complex data pipelines, leading the design and implementation of scalable data infrastructure, and ensuring data quality and reliability at scale
- ETL Pipelines: Proven experience in designing, building, and maintaining ETL pipelines.
- Python: Strong proficiency in Python programming for data manipulation, transformation, and automation.
- Experience in SQL and data structures
- Knowledge of big data technologies such as Apache Spark, Flink, and Hadoop, and of NoSQL databases (a brief Spark sketch follows the lists below).
- Knowledge of cloud technologies (at least one) such as AWS, Azure, and Google Cloud Platform.
- Proficient in data management and data governance
- Strong analytical experience & skills that can extract actionable insights from raw data to help improve the business.
- Strong analytical and problem-solving skills.
- Excellent communication and teamwork abilities.
Nice To Have:
- Azure: Experience with Azure data services (e.g., Azure Data Factory, Azure Databricks, Azure SQL Database).
- Terraform: Knowledge of Terraform for infrastructure as code (IaC) to manage cloud resources.
- Bachelor’s degree in Computer Science, Information Technology, Engineering, or a related field from top-tier Indian Institutes of Information Technology (IIITs).
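As a rough illustration of the Spark-based pipeline work referenced above, here is a minimal PySpark sketch; the storage paths, column names, and aggregation logic are assumptions made purely for the example.

```python
# Illustrative only: paths, columns, and the aggregation are assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("furnace-quality-rollup").getOrCreate()

# Read raw process data (assumed Parquet layout on the data lake)
events = spark.read.parquet("s3a://lake/raw/furnace_events/")

# Roll up to one row per machine per day for downstream analytics
daily = (
    events.withColumn("day", F.to_date("event_time"))
          .groupBy("machine_id", "day")
          .agg(
              F.avg("temperature").alias("avg_temperature"),
              F.count("*").alias("event_count"),
          )
)

daily.write.mode("overwrite").partitionBy("day").parquet("s3a://lake/curated/furnace_daily/")
spark.stop()
```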
Benefits And Perks
- A culture that fosters innovation, creativity, continuous learning, and resilience
- Progressive leave policy promoting work-life balance
- Mentorship opportunities with highly qualified internal resources and industry-driven programs
- Multicultural peer groups and supportive workplace policies
- Annual workcation program allowing you to work from various scenic locations
- Experience the unique environment of a dynamic start-up
Why should you join TVARIT?
Working at TVARIT, a deep-tech German IT startup, offers a unique blend of innovation, collaboration, and growth opportunities. We seek individuals eager to adapt and thrive in a rapidly evolving environment.
If this opportunity excites you and aligns with your career aspirations, we encourage you to apply today!

Greetings, Wissen Technology is hiring for the position of Data Engineer.
Please find the job description below for your reference:
JD
- Design, develop, and maintain data pipelines on AWS EMR (Elastic MapReduce) to support data processing and analytics (see the sketch after this list).
- Implement data ingestion processes from various sources including APIs, databases, and flat files.
- Optimize and tune big data workflows for performance and scalability.
- Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver solutions.
- Manage and monitor EMR clusters, ensuring high availability and reliability.
- Develop ETL (Extract, Transform, Load) processes to cleanse, transform, and store data in data lakes and data warehouses.
- Implement data security best practices to ensure data is protected and compliant with relevant regulations.
- Create and maintain technical documentation related to data pipelines, workflows, and infrastructure.
- Troubleshoot and resolve issues related to data processing and EMR cluster performance.
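For context on the EMR pipeline work above, here is a small, hedged sketch of submitting a Spark job to an existing EMR cluster as a step using boto3; the cluster ID, bucket, region, and script path are placeholders, not details of this posting.

```python
# Illustrative only: cluster ID, bucket, and script path are placeholders.
import boto3

emr = boto3.client("emr", region_name="us-east-1")

# Submit a Spark job to an existing EMR cluster as a step
response = emr.add_job_flow_steps(
    JobFlowId="j-XXXXXXXXXXXXX",
    Steps=[
        {
            "Name": "nightly-ingest",
            "ActionOnFailure": "CONTINUE",
            "HadoopJarStep": {
                "Jar": "command-runner.jar",
                "Args": [
                    "spark-submit",
                    "--deploy-mode", "cluster",
                    "s3://my-bucket/jobs/ingest.py",
                ],
            },
        }
    ],
)
print("Submitted step:", response["StepIds"][0])
```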
Qualifications:
- Bachelor’s degree in Computer Science, Information Technology, or a related field.
- 5+ years of experience in data engineering, with a focus on big data technologies.
- Strong experience with AWS services, particularly EMR, S3, Redshift, Lambda, and Glue.
- Proficiency in programming languages such as Python, Java, or Scala.
- Experience with big data frameworks and tools such as Hadoop, Spark, Hive, and Pig.
- Solid understanding of data modeling, ETL processes, and data warehousing concepts.
- Experience with SQL and NoSQL databases.
- Familiarity with CI/CD pipelines and version control systems (e.g., Git).
- Strong problem-solving skills and the ability to work independently and collaboratively in a team environment

Sr. Data Engineer (Data Warehouse-Snowflake)
Experience: 5+yrs
Location: Pune (Hybrid)
As a Senior Data Engineer with Snowflake expertise, you are a subject matter expert who is curious and an innovative thinker, ready to mentor young professionals. You are a key person to convert the vision and data strategy for data solutions and deliver them. With your knowledge you will help create data-driven thinking within the organization, not just within data teams, but also in the wider stakeholder community.
Skills Preferred
- Advanced written, verbal, and analytic skills, and demonstrated ability to influence and facilitate sustained change. Ability to convey information clearly and concisely to all levels of staff and management about programs, services, best practices, strategies, and organizational mission and values.
- Proven ability to focus on priorities, strategies, and vision.
- Very good understanding of Data Foundation initiatives, such as Data Modelling, Data Quality Management, Data Governance, Data Maturity Assessments, and Data Strategy, in support of the key business stakeholders.
- Actively deliver the roll-out and embedding of Data Foundation initiatives in support of the key business programs advising on the technology and using leading market standard tools.
- Coordinate the change management process, incident management and problem management process.
- Ensure traceability of requirements from Data through testing and scope changes, to training and transition.
- Drive implementation efficiency and effectiveness across the pilots and future projects to minimize cost, increase speed of implementation and maximize value delivery
Knowledge Preferred
- Extensive knowledge and hands-on experience with Snowflake and its different components, such as user/group management, data store/warehouse management, external stages/tables, working with semi-structured data, Snowpipe, etc.
- Implement and manage CI/CD for migrating and deploying codes to higher environments with Snowflake codes.
- Proven experience with Snowflake access control and authentication, data security, data sharing, the VS Code extension for Snowflake, replication and failover, and SQL optimization; the analytical ability to troubleshoot and debug development and production issues quickly is key to success in this role.
- Proven technology champion in working with relational and data warehouse databases, query authoring (SQL), and working familiarity with a variety of other databases.
- Highly Experienced in building and optimizing complex queries. Good with manipulating, processing, and extracting value from large, disconnected datasets.
- Your experience in handling big data sets and big data technologies will be an asset.
- Proven champion with in-depth knowledge of at least one of the scripting languages: Python, SQL, PySpark (a brief connector sketch follows this list).
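To make the Snowflake-plus-scripting expectation above concrete, here is a minimal, illustrative query sketch using the snowflake-connector-python library; the account, credentials, warehouse, and table are placeholders.

```python
# Illustrative only: credentials and object names are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",
    user="analyst",
    password="***",
    warehouse="ANALYTICS_WH",
    database="SALES_DB",
    schema="PUBLIC",
)
try:
    cur = conn.cursor()
    # Run an aggregation over the last seven days and fetch results
    cur.execute(
        "SELECT region, SUM(amount) AS total "
        "FROM orders WHERE order_date >= DATEADD(day, -7, CURRENT_DATE()) "
        "GROUP BY region ORDER BY total DESC"
    )
    for region, total in cur.fetchall():
        print(region, total)
finally:
    conn.close()
```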
Primary responsibilities
- You will be an asset in our team bringing deep technical skills and capabilities to become a key part of projects defining the data journey in our company, keen to engage, network and innovate in collaboration with company wide teams.
- Collaborate with the data and analytics team to develop and maintain a data model and data governance infrastructure using a range of different storage technologies that enables optimal data storage and sharing using advanced methods.
- Support the development of processes and standards for data mining, data modeling and data protection.
- Design and implement continuous process improvements for automating manual processes and optimizing data delivery.
- Assess and report on the unique data needs of key stakeholders and troubleshoot any data-related technical issues through to resolution.
- Work to improve data models that support business intelligence tools, improve data accessibility and foster data-driven decision making.
- Ensure traceability of requirements from Data through testing and scope changes, to training and transition.
- Manage and lead technical design and development activities for implementation of large-scale data solutions in Snowflake to support multiple use cases (transformation, reporting and analytics, data monetization, etc.).
- Translate advanced business data, integration and analytics problems into technical approaches that yield actionable recommendations, across multiple, diverse domains; communicate results and educate others through design and build of insightful presentations.
- Exhibit strong knowledge of the Snowflake ecosystem and can clearly articulate the value proposition of cloud modernization/transformation to a wide range of stakeholders.
Relevant work experience
Bachelor’s degree in a Science, Technology, Engineering, Mathematics, or Computer Science discipline, or equivalent, with 7+ years of experience in enterprise-wide data warehousing, governance, policies, procedures, and implementation.
Aptitude for working with data, interpreting results, business intelligence and analytic best practices.
Business understanding
Good knowledge and understanding of Consumer and industrial products sector and IoT.
Good functional understanding of solutions supporting business processes.
Skill Must have
- Snowflake 5+ years
- Overall different Data warehousing techs 5+ years
- SQL 5+ years
- Data warehouse designing experience 3+ years
- Experience with cloud and on-prem hybrid models in data architecture
- Knowledge of Data Governance and strong understanding of data lineage and data quality
- Programming & Scripting: Python, Pyspark
- Database technologies such as Traditional RDBMS (MS SQL Server, Oracle, MySQL, PostgreSQL)
Nice to have
- Demonstrated experience in modern enterprise data integration platforms such as Informatica
- AWS cloud services: S3, Lambda, Glue and Kinesis and API Gateway, EC2, EMR, RDS, Redshift and Kinesis
- Good understanding of Data Architecture approaches
- Experience in designing and building streaming data ingestion, analysis and processing pipelines using Kafka, Kafka Streams, Spark Streaming, Stream sets and similar cloud native technologies.
- Experience with implementation of operations concerns for a data platform such as monitoring, security, and scalability
- Experience working in DevOps, Agile, Scrum, Continuous Delivery and/or Rapid Application Development environments
- Building mock and proof-of-concepts across different capabilities/tool sets exposure
- Experience working with structured, semi-structured, and unstructured data, extracting information, and identifying linkages across disparate data sets

- Design and maintain efficient database solutions using RDBMS.
- Write complex SQL queries for data extraction and manipulation.
- Implement and optimize AWS services for scalable application deployment.
- Develop server-side logic using PHP and integrate front-end elements with JavaScript.
- Collaborate with teams to design, develop, and deploy web applications.
3-6 years of experience in functional testing with a good foundation in technical expertise.
Experience in the Capital Markets domain is a MUST.
Exposure to API testing tools like SoapUI and Postman.
Well versed in SQL.
Hands-on experience in debugging issues using Unix commands.
Basic understanding of XML and JSON structures.
Knowledge of Finesse is good to have.
What You’ll Be Doing:
- Subject Matter Expert on implementing APEX code and workflow code end-to-end in Salesforce, identifying dependencies in support of delivery, release, and change management.
- Expert in Salesforce data structures and object structures, and in patterns for extending them to provide highly customised data storage.
- Partner with all cross-functional teams to determine Salesforce CRM needs.
- Develop customised solutions within the Salesforce platform, including design, implementation, quality control, end-to-end testing plans in multiple environments (dev/test/stage/prod), troubleshooting and bug fixing.
- Own stability of the applications to Engineering standards, including incident management, monitoring, alerting, and incident resolution for all aspects of the application.
- Create timelines, set expectations with RevOps and other cross-functional stakeholders.
- Collaborate with Engineering to ensure changes to event implementations and integrations are gracefully handled in the application.
- Maintain the security and data integrity of the application software.
- Research, Diagnose, and Monitor performance bottlenecks.
- Write documentation and provide technical training for cross-functional teams.
- Enjoy working with remote teams and people that have strong opinions
- Excellent verbal and written communication skills
- Excellent leadership and collaboration skills and the ability to lead large projects autonomously.
- Exhibits a high level of initiative and integrity and is empathetic to the needs of individuals across the organisation
- Enjoy working with a diverse group of people who have strong opinions.
- Comfortable not knowing answers, but resourceful and able to resolve issues.
- Strong problem solving and critical thinking skills.
- Strong project management, presentation and public speaking skills.
- Self-starter who’s comfortable with ambiguity, asking questions, and adept at shifting priorities
- Strong customer service skills with proven service mentality
- Strong in documenting business processes and communicating system requirements
Desired Traits, Qualifications, Education and Experience Equivalency
- Be able to make decisions, meet targets and work under pressure.
- Ability to set up, facilitate, and lead service and process improvement sessions with a range of business stakeholders.
- Ability to present to groups of mixed technical understanding.
- Adept at creating visuals to tell a story.
- Ability to build strong trust based relationships.
- Adept at shifting priorities while maintaining a high degree of organisation and control.
- Ability to manage multiple tasks and projects simultaneously.
- Ability to recommend actionable insights from projects and lead projects autonomously.
- Ability to travel to remote sites (less than 25%).
- Demonstrated ability to work with geographically dispersed teams.
- Ability to learn and understand new ways of doing things.
- Ability to drive standards and best practices for a team or organisation.
- Ability to exercise good judgment within broadly defined practices and policies.
- Ability to lead a team towards high quality decisions when they have differing perspectives and ideas.
- Ability to provide business context for engineers as well as highlight technical challenges for non-engineers.
- Excellent decision-making skills and the ability to work in a collaborative environment, a team player.
Traits of a successful partnership:
- You have a passionate commitment to people and deep empathy for how supporting individuals leads to a stronger company culture.
- You’re Collaborative – It is expected that you will partner with various departments to ensure organisational needs are met and to develop strategic programs.
- You will serve as an Advocate – In the role, you will solicit and listen to concerns, and take an active role in resolving problems.
Preferred Experience/ Minimum Qualifications
- Graduate in Computer Science or a related field, or Professional Qualification/Salesforce Certification
- 4+ years of Salesforce developer experience
- 4 years of experience in application development and/or software engineering.
- 2 years of proven continuous improvement analytical experience from a similar role, including project management and business analysis with an excellent understanding of continuous improvement concepts.
- Advanced knowledge of Salesforce CRM platform
- Proficiency with Salesforce Lightning Design System and Salesforce development lifecycle
- Demonstrated proficiency in SQL, Apex, LWC, Java and VisualForce
- Experience with Salesforce Sales, Service and Marketing clouds
- Experience developing customer-facing interfaces, including reports and dashboards, in Salesforce or other systems.
- Strong systems knowledge with the ability to effectively utilise DevOps tools such as Metadata API, GIT, Visual Studio Code, Jenkins, Jira, Confluence, etc.
- Strong understanding of Product development lifecycles.
- Experience with leading and coordinating cross-functional initiatives, conducting interviews and performing analyses to create business cases for projects.
- Experience performing live training sessions for internal and external stakeholders during system and process implementation.
- Must have strong communication skills and possess the ability to interact effectively with co-workers.
- Must have strong leadership skills.
- Additional Salesforce Certifications e.g. Certified Salesforce Administrator, Certified Salesforce Platform App Builder, Platform Developer II, JavaScript Developer I are highly desirable.
- Salesforce DevOps experience is highly desirable.
- Salesforce Developer Certifications will be given preference.
About Fictiv (http://www.fictiv.com/)
Our Digital Manufacturing Ecosystem is transforming how the next rockets, self-driving cars, and life-saving robots are designed, developed and delivered to customers around the world.
This transformation is made possible through our technology-backed platform, our global network of manufacturing partners, and our people with deep expertise in hardware and software development.
We’re actively seeking potential teammates who can bring diverse perspectives and experience to our culture and company. We believe inclusion is the best way to create a strong, empathetic team. Our belief is that the best team is born from an environment that emphasizes respect, honesty, collaboration, and growth.
We encourage applications from members of underrepresented groups, including but not limited to women, members of the LGBTQ community, people of color, people with disabilities, and veterans.
What’s in it for you?
Opportunity To Unlock Your Creativity
Think of all the times you were held back from trying new ideas because you were boxed in by bureaucratic legacy processes or old school tactics. Having a growth mindset is deeply ingrained into our company culture since day 1 so Fictiv is an environment where you have the creative liberty and support of the team to try big bold ideas to achieve our sales and customer goals.
Opportunity To Grow Your Career
There are plenty of sales jobs out there. The question is whether any of them will help you grow in your career. Will you be challenged by teammates to achieve your potential? Or are they roles that will ask you to do more of what you've already mastered? At Fictiv, you'll be surrounded by supportive teammates who will push you to be your best through their curiosity and passion.
Impact in this Role
The Business Applications team performs a critical function for Fictiv by managing software that is part of the framework used to conduct day-to-day business. This team writes code to provide customised application configuration and data structures, customise workflows, implement monitoring and alerting, secure and control access, and integrate business software with the Fictiv platform. Functional areas supported include: Operations, Finance, Sales, Marketing, Engineering, Product, Architecture, and Customer Support. The Business Applications team partners closely with cross-functional stakeholders to ensure that business systems software is properly secured and has managed change control to meet each of their needs.
This team sets the stage for ensuring Fictiv's business is delivering on KPIs and goals. This team provides inputs for Fictiv’s strategic decision making.
The Business Applications team implements business software across Fictiv’s departments.
As the Sr. Salesforce Analyst (Sr. Salesforce Application Developer, Specialist) you will partner with the RevOps team, to focus on changes and improvements to Salesforce functionality in support of business needs. You will work with the RevOps core team supporting Salesforce Sales and must be a strategic partner in determining best practices and efficiency gains as it relates to process improvements.
You will work with the Lead Salesforce Analyst to design and implement solutions that meet the technical and business requirements outlined by RevOps. You will also analyse project objectives, create customer workflows, and troubleshoot errors. This role requires extensive experience working with Salesforce CRM platforms, application development skills, and the ability to solve complex software problems.
You will be responsible for understanding requirements, defining design, working with other cross-functional teams to create implementation plans, and providing thought leadership for all solutions to meet and exceed RevOps expectations. You will write APEX code and any supporting code required to implement solutions. You will own the stability, security, data accuracy, uptime and issue resolution in Salesforce. You will provide testing plans, unit testing and documentation for all solutions and develop strong cross-functional relations with Product, Engineering and Infrastructure. You will be accountable for following all Fictiv’s Engineering guidelines.
The Above Mentioned budget is for
- Experience with QE for distributed, highly scalable systems
- Good understanding of OOPS concepts and strong programming skills in Java, Groovy or JavaScript
- Hands-on experience of working with at least one GUI-based test automation tool for desktop and/or mobile automation; experience with multiple tools will be an added advantage
- Proficient in writing SQL queries
- Familiarity with the process of test automation tool selection & test approach
- Experience in designing and developing automation frameworks and creating scripts using best industry practices such as the Page Object Model
- Integrate test suites into the test management system and custom test harness
- Familiar with implementation of design patterns, modularization, and user libraries for framework creation
- Can mentor the team and has a short learning curve for new technology
- Understands all aspects of Quality Engineering
- Understanding of SOAP and REST principles
- Thorough understanding of microservices architecture
- In-depth hands-on experience of working with at least one API testing tool like RestAssured, SOAP UI, NodeJS (see the sketch after this list)
- Hands-on experience working with Postman or a similar tool
- Hands-on experience in parsing complex JSON & XML and data validation using serialization techniques like POJO classes or similar
- Hands-on experience in performing request and response schema validation, response codes and exceptions
- Good understanding of BDD, TDD methodologies and tools like Cucumber, TestNG, JUnit or similar
- Experience in defining API E2E testing strategy, designing and developing API automation frameworks
- Working experience with build tools Maven / Gradle, Git etc.
- Experience in creating test pipelines – CI/CD
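As an illustration of the API testing and response schema validation skills above, here is a small, hedged Python sketch using the requests and jsonschema libraries (the posting itself names RestAssured/SoapUI; Python is used here purely for illustration). The endpoint and schema are invented placeholders.

```python
# Illustrative only: endpoint and schema are placeholders.
import requests
from jsonschema import validate

ORDER_SCHEMA = {
    "type": "object",
    "required": ["id", "status", "items"],
    "properties": {
        "id": {"type": "integer"},
        "status": {"type": "string"},
        "items": {"type": "array"},
    },
}


def test_get_order():
    resp = requests.get("https://api.example.com/orders/42", timeout=10)
    # Response code check plus JSON schema validation, as called for above
    assert resp.status_code == 200
    validate(instance=resp.json(), schema=ORDER_SCHEMA)
```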
Position: Sr SDET
Experience: 5 years
Location: Pune (Amar tech park)
Mode: 5 days a week from office
What’s the role?
We are looking for a Senior SDET to contribute to the design and building of our software offerings. Our engineering team works with .NET in an Agile environment. We use Azure DevOps Pipelines and Release. The definition of 'DONE' includes writing automated tests so that our full regression on releases will be effortless. We strive to do things right and not just band-aid the problems. Our management is engaged and looking for feedback on how we can become better, iteratively.
You will have the opportunity to…
- Participate in story refinement sessions to ensure the details and dependencies are well defined and understood, with considerations for testing
- Collaborate with Product Owner and Developers as a team to deliver quality
- Write and maintain test cases, execute them, and perform ad-hoc testing with the end-user experience in mind
- Automate test cases based on priority before the close of the sprint
- Participate in code review to ensure commits are up to standards
- Monitor the Azure Release for regression bugs and or issues with environments
- Work with geo-distributed teams to coordinate testing of features
- Be vocal during Retrospective meetings and follow up on process improvements
- Manage quality and bug reports in all stages of releases
Our ideal candidate will have…
- 5+ years of experience as an SDET
- 3+ years of experience with Selenium WebDriver and Grid
- 2+ years of experience of testing web API through code
- Strong experience of OOP design with C# programming skill
- Ability to write complex SQL queries for data validation
- Knowledge of test methodologies and their corresponding tools
- Ability to recognize errors and assess risks within applications and or processes
- Working knowledge of Visual Studio 2016+ and Git
- 1+ year of experience with of CI/CD pipelines
- An understanding of the ROI and risk for ad-hoc testing, test automation, code coverage and feature coverage
- A passion for design, development, and quality.

Dear Connections,
We are hiring! Join our dynamic team as a QA Automation Tester (Python, Java, Selenium, API, SQL, Git)! We're seeking a passionate professional to contribute to our innovative projects. If you thrive in a collaborative environment, possess expertise in Python, Java, Selenium, and Robot Framework, and are ready to make an impact, apply now! Wissen Technology is committed to fostering innovation, growth, and collaboration. Don't miss this chance to be part of something extraordinary.
Company Overview:
Wissen is the preferred technology partner for executing transformational projects and accelerating implementation through thought leadership and a solution mindset. It is a leading IT solutions and consultancy firm dedicated to providing innovative and customized solutions to global clients. We leverage cutting-edge technologies to empower businesses and drive digital transformation.
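Since the role calls for Python and Selenium automation, here is a minimal, illustrative Selenium WebDriver sketch in Python; the URL, locators, and assertion are placeholders and assume a locally available Chrome/ChromeDriver setup.

```python
# Illustrative only: URL, locators, and assertion are placeholders.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://example.com/login")
    driver.find_element(By.ID, "username").send_keys("qa_user")
    driver.find_element(By.ID, "password").send_keys("secret")
    driver.find_element(By.CSS_SELECTOR, "button[type='submit']").click()

    # Simple post-login assertion
    assert "Dashboard" in driver.title
finally:
    driver.quit()
```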
Role Description
This is a full-time hybrid role as a GCP Data Engineer. As a GCP Data Engineer, you will be responsible for managing large sets of structured and unstructured data and developing processes to convert data into insights, information, and knowledge.
Skill Name: GCP Data Engineer
Experience: 7-10 years
Notice Period: 0-15 days
Location :-Pune
If you have a passion for data engineering and possess the following , we would love to hear from you:
🔹 7 to 10 years of experience working on Software Development Life Cycle (SDLC)
🔹 At least 4+ years of experience in Google Cloud platform, with a focus on Big Query
🔹 Proficiency in Java and Python, along with experience in Google Cloud SDK & API Scripting
🔹 Experience in the Finance/Revenue domain would be considered an added advantage
🔹 Familiarity with GCP Migration activities and the DBT Tool would also be beneficial
You will play a crucial role in developing and maintaining our data infrastructure on the Google Cloud platform.
Your expertise in SDLC, BigQuery, Java, Python, and Google Cloud SDK & API scripting will be instrumental in ensuring the smooth operation of our data systems.
Join our dynamic team and contribute to our mission of harnessing the power of data to make informed business decisions.
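For a flavour of the BigQuery-plus-Python work described above, here is a minimal, illustrative sketch using the google-cloud-bigquery client; the project, dataset, table, and query are placeholders and assume application default credentials are configured.

```python
# Illustrative only: project, dataset, and query are placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="my-gcp-project")

query = """
    SELECT invoice_month, SUM(amount) AS revenue
    FROM `my-gcp-project.finance.invoices`
    GROUP BY invoice_month
    ORDER BY invoice_month
"""

# Run the query and print the monthly rollup
for row in client.query(query).result():
    print(row["invoice_month"], row["revenue"])
```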
Title: Technical Analyst - OTM
Experience: 3-9 Years
Work Location: Mohali
Shift Timing: Rotational Shift 24x5
Notice Period: Immediate to 30 days Max
Key Skills: OTM, OBIEE, BI Publisher, Oracle ERP
Job Description:
The Oracle Transportation Management Technical Analyst will share the responsibility for design, implementation, and support of business solutions based on Emerson’s instance of Oracle Transportation Management commonly referred to as SCO (Supply Chain Optimization). The Technical Analyst utilizes expertise in Oracle Transportation Management to provide assistance in the ongoing implementation, enhancement, and support of SCO functionality.
Roles and Responsibilities:
- Provide support (e.g., break/fix, how to expertise, enhancements, monitoring, testing, troubleshooting) for the SCO application.
- Works collaboratively with Corporate Logistics and SCO IT Program/Project Managers to understand program requirements and assist with the evaluation of alternative solutions.
- Assist with program rollout activities, including business unit and trading partner on-boarding, project coordination, status reporting and communication to program management.
- Proactively monitors processes to identify trends; analyses/predicts trends and develops a long-range plan designed to resolve problems and prevent them from recurring to ensure high service levels.
- Ensures SCO system documentation is complete and maintained.
- Works effectively in a global highly matrixed team environment.
Skills & Experience Required:
- 4 to 8 years of IT experience, including implementation of Oracle Transportation Management.
- OTM Expert, both Functionally and technically (Setup configuration, Order Management, Shipment management, Financials, rates, master data, bulk planning parameters, VPDs, user configuration, screen set development, SQL queries, Tracking Events and working with CSV & XML files).
- Hands on experience with triage of day-to-day OTM systems issues and providing resolution on complex issues.
- Knowledge of Logistics management principles and processes.
- Broad knowledge and experience with various ERP systems. Working knowledge of Oracle eBusiness Suite (Procurement, Shipping, XML Gateway) is highly preferred.
- Working knowledge of BI Publisher, FTI/OAC, OBIEE and ETL.
- Good knowledge of EDI and any other Middleware systems.
- Strong customer service orientation with strong written and verbal communication skills, including comfort with presenting to diverse technical and non-technical audiences at all levels of the organization.
- Ability to multi-task and work within diverse multi-disciplinary global project teams.
- Detail-oriented with strong problem-solving skills.
- Comfortable with performing detailed data analysis to identify opportunities and gain higher level insight.
- Knowledge on GTM (Global Trade Management) will be a plus.
- Some travel might be required.
Education
- Bachelor’s degree in computer science, Information Systems, or another related field.


Title/Role: Python Django Consultant
Experience: 8+ Years
Work Location: Indore / Pune /Chennai / Vadodara
Notice period: Immediate to 15 Days Max
Key Skills: Python, Django, Crispy Forms, Authentication, Bootstrap, jQuery, Server Side Rendered, SQL, Azure, React, Django DevOps
Job Description:
- Should have experience creating forms in Django; knowledge of Crispy Forms is a plus (see the sketch after this list).
- Must have leadership experience
- Should have a good understanding of function-based and class-based views.
- Should have a good understanding of authentication (JWT and token authentication).
- Django – at least one senior developer with deep Django experience; the other one or two can be mid-to-senior Python or Django developers.
- FrontEnd – Must have React/ Angular, CSS experience
- Database – ideally SQL; at a minimum, the most senior developer should have solid DB experience.
- Cloud – Azure preferred but agnostic
- Consulting / client project background ideal.
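To illustrate the Django forms and class-based-views expectation above, here is a minimal, illustrative sketch; the form fields, template name, and success URL are assumptions, and the form could equally be rendered with django-crispy-forms in the template.

```python
# Illustrative only: form fields, template, and URL are assumptions.
from django import forms
from django.views.generic.edit import FormView


class FeedbackForm(forms.Form):
    name = forms.CharField(max_length=100)
    email = forms.EmailField()
    message = forms.CharField(widget=forms.Textarea)


class FeedbackView(FormView):
    """Class-based view that renders and processes the form."""
    template_name = "feedback.html"
    form_class = FeedbackForm
    success_url = "/thanks/"

    def form_valid(self, form):
        # Persist or forward form.cleaned_data here
        return super().form_valid(form)
```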
Django Stack:
- Django
- Server Side Rendered HTML
- Bootstrap
- jQuery
- Azure SQL
- Azure Active Directory
- Server-side rendered HTML with jQuery is older tech, but it is what we are OK with for internal tools. This is a good combination of a late-adopter agile stack integrated within an enterprise. Potentially we can push to React for some discrete projects or pages that need more dynamism.
Django Devops:
- Should have expertise with deploying and managing Django in Azure.
- Django deployment to Azure via Docker.
- Django connection to Azure SQL.
- Django auth integration with Active Directory.
- Terraform scripts to make this setup seamless.
- Easy, proven deployment/setup to AWS and GCP.
- Load balancing, more advanced services, task queues, etc.
Position = Java Developer
We are looking to hire a committed Java Developer with experience in building high-performing, scalable, enterprise-grade applications. You will be part of our Engineering team that works on mission-critical applications. You will manage Java/Java EE application development while providing expertise in the full software development lifecycle, from concept and design to testing.
You are required to:
Contribute to all phases of the development lifecycle.
Write well-designed, testable & efficient code.
Ensure designs are in compliance with specifications.
Prepare and produce releases of software components.
Support continuous improvement by investigating alternatives and technologies and presenting these for architectural review.
Technical Skills required
Java, Spring Boot, Microservices, Data Structures & Algorithms, MySQL, NoSQL, MongoDB, and Hibernate.
OUR CURRENT STACK
Backend: Spring (JAVA), Laravel (PHP), MySQL, NoSQL, NGINX Plus.
Frontend: Angular 5+ Ngrx/store 5
Infrastructure: Google cloud platform (App engine, CloudSQL, BigQuery, PubSub, Firebase Hosting), Scrapy Cloud, Pusher.io (websockets), Getstream.io, Filestack, Postmark app, AS2 Gateway, Google Cloud Endpoints Framework, MongoDB, Algolia, Memcache
Tools: Gitlab, Postman app, JIRA.
Wondering what your responsibilities would be?
You are where our search ends, if you hold:
B. Tech/ M. Tech or corresponding degree
Experience in the same role of around 1-6 years.
Experience with connecting backend and frontend services.
Exposure to consuming data through different interfaces (Web APIs / Sockets / REST / RESTful / JSON / XML).
Proficiency in Data Structures and Algorithms.
Understanding of web markup, including HTML5 and CSS.
Understanding of client-side scripting and JavaScript frameworks.
Ability to write clean, reusable and well documented code.
Proficient understanding of code versioning tools, such as Git.
Knowledge of API authentication techniques (Token, JWT, OAuth2) - desirable but not mandatory. (Experience with API Design will be a plus)
Fair spoken and written English.
Flexibility – things change around here. FAST!
Other Inter-personal skills like self-motivation, persistency, patience and eagerness to learn and work independently.


Full Stack Developer Job Description
Position: Full Stack Developer
Department: Technology/Engineering
Location: Pune
Type: Full Time
Job Overview:
As a Full Stack Developer at Invvy Consultancy & IT Solutions, you will be responsible for both front-end and back-end development, playing a crucial role in designing and implementing user-centric web applications. You will collaborate with cross-functional teams including designers, product managers, and other developers to create seamless, intuitive, and high-performance digital solutions.
Responsibilities:
Front-End Development:
Develop visually appealing and user-friendly front-end interfaces using modern web technologies such as C#, HTML5, CSS3, and JavaScript frameworks (e.g., React, Angular, Vue.js).
Collaborate with UX/UI designers to ensure the best user experience and responsive design across various devices and platforms.
Implement interactive features, animations, and dynamic content to enhance user engagement.
Optimize application performance for speed and scalability.
Back-End Development:
Design, develop, and maintain the back-end architecture using server-side technologies (e.g., Node.js, Python, Ruby on Rails, Java, .NET).
Create and manage databases, including data modeling, querying, and optimization.
Implement APIs and web services to facilitate seamless communication between front-end and back-end systems.
Ensure security and data protection by implementing proper authentication, authorization, and encryption measures.
Collaborate with DevOps teams to deploy and manage applications in cloud environments (e.g., AWS, Azure, Google Cloud).
Qualifications:
Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience).
Proven experience as a Full Stack Developer or similar role.
Proficiency in front-end development technologies like HTML5, CSS3, JavaScript, and popular frameworks (React, Angular, Vue.js, etc.).
Strong experience with back-end programming languages and frameworks (Node.js, Python, Ruby on Rails, Java, .NET, etc.).
Familiarity with database systems (SQL and NoSQL) and their integration with web applications.
Knowledge of web security best practices and application performance optimization.
Proven work experience as a Database Programmer, Database Developer, or similar role.
Strong proficiency in SQL and hands-on experience with PostgreSQL.
Solid understanding of relational database concepts and principles.
Experience in database design, schema modeling, and optimization.
Familiarity with database administration tasks, such as user management, backup and recovery, and performance tuning.
Ability to write efficient SQL queries, stored procedures, and functions (see the sketch after this list).
Detail-oriented with a focus on data accuracy and integrity.
Familiarity with software development methodologies and programming languages is a plus.
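As a small illustration of the PostgreSQL querying skills above, here is a hedged Python sketch using psycopg2 with a parameterised query; the connection details, table, and filter values are placeholders.

```python
# Illustrative only: connection details, table, and filter are placeholders.
import psycopg2

conn = psycopg2.connect(
    host="localhost", dbname="appdb", user="app", password="***"
)
try:
    with conn, conn.cursor() as cur:
        # Parameterised query keeps the statement safe from SQL injection
        cur.execute(
            "SELECT id, email FROM users WHERE created_at >= %s ORDER BY id",
            ("2024-01-01",),
        )
        for user_id, email in cur.fetchall():
            print(user_id, email)
finally:
    conn.close()
```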

Job Location: Hyderabad/Bangalore/ Chennai/Pune/Nagpur
Notice period: Immediate - 15 days
1. Python Developer with Snowflake
Job Description :
- 5.5+ years of strong Python development experience with Snowflake.
- Strong hands-on experience with SQL and the ability to write complex queries.
- Strong understanding of how to connect to Snowflake using Python; should be able to handle any type of file.
- Development of Data Analysis, Data Processing engines using Python
- Good Experience in Data Transformation using Python.
- Experience in Snowflake data load using Python (see the sketch after this list).
- Experience in creating user-defined functions in Snowflake.
- SnowSQL implementation.
- Knowledge of query performance tuning will be added advantage.
- Good understanding of Datawarehouse (DWH) concepts.
- Interpret/analyze business requirements & functional specification
- Good to have DBT, FiveTran, and AWS Knowledge.
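To give a flavour of the Snowflake data-load-with-Python requirement above, here is a minimal, illustrative sketch using snowflake-connector-python with PUT and COPY INTO against a table stage; the file path, table, and credentials are placeholders.

```python
# Illustrative only: file path, table, and credentials are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="loader", password="***",
    warehouse="LOAD_WH", database="RAW_DB", schema="PUBLIC",
)
cur = conn.cursor()
try:
    # Stage a local file into the table stage, then bulk-load it with COPY INTO
    cur.execute("PUT file:///tmp/orders.csv @%orders AUTO_COMPRESS=TRUE")
    cur.execute("COPY INTO orders FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)")
finally:
    cur.close()
    conn.close()
```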
Job Description: Core Java Developer
Company: Mobile Programming LLC
Location: Pune (Remote work available)
Salary: Up to 16 LPA
Position: Core Java Developer
Responsibilities:
- Design, develop, and maintain Java-based applications using Core Java and Spring Boot frameworks.
- Collaborate with cross-functional teams to deliver high-quality software solutions.
- Conduct code reviews and troubleshoot/debug complex issues.
- Optimize application performance and stay updated with industry trends.
Requirements:
- Minimum 6 years of hands-on Core Java development experience.
- Strong proficiency in Spring Boot framework.
- Solid understanding of OOP principles, web development (HTML/CSS/JavaScript), RESTful web services, and SQL.
- Experience with Git and problem-solving skills.
- Excellent communication skills and ability to work independently and as part of a team.
- Bachelor's or Master's degree in Computer Science or a related field.
Note: Immediate joiners required. Please include your resume and relevant code samples/projects when applying.
L2 Support
Location : Mumbai, Pune, Bangalore
Requirement details : (Mandatory Skills)
- Excellent communication skills
- Production Support, Incident Management
- SQL ( Must have experience in writing complex queries )
- Unix (must have working experience on the Linux operating system)
- Perl/Shell scripting
- Candidates working in the Investment Banking domain will be preferred
Title: Senior Business Analyst
Job Location: Hyderabad, Chennai, Cochin, Bangalore, Delhi, Kolkata, Pune
Experience: 9+ years
Important Note: We are hiring for a Techno-Functional Business Analyst with a background as a Developer (preferably Java or .Net). We are looking for a candidate with expertise in providing business solutions.
Business Analysts conduct market analyses, analyzing both product lines and the overall profitability of the business. In addition, they develop and monitor data quality metrics and ensure business data and reporting needs are met. Strong technology, analytical, and communication skills are must-have traits.
Essential Responsibilities:
- Evaluating business processes, anticipating requirements, uncovering areas for improvement, and developing and implementing solutions.
- Leading ongoing reviews of business processes and developing optimization strategies.
- Staying up to date on the latest process and IT advancements to automate and modernize systems.
- Conducting meetings and presentations to share ideas and findings.
- Understanding business requirements, documenting them, and translating them into features / user stories.
- Ensuring the system design meets the needs of the customer. Participating in functionality testing and user acceptance testing of the new features.
- Developing business artifacts in relation to the client business and conducting formal training sessions for the team.
- Acting as a coach on assigned projects and assignments, and providing business-related direction and clarification to the developers and other project stakeholders.
- Developing a team culture where everyone thinks from the end user's perspective.
- Performing requirements analysis.
- Documenting and communicating the results of your efforts.
- Effectively communicating your insights and plans to cross-functional team members and management.
- Gathering critical information from meetings with various stakeholders and producing useful reports.
- Working closely with clients, technicians, and managerial staff.
- Providing leadership, training, coaching, and guidance to junior staff.
- Allocating resources and maintaining cost efficiency.
- Ensuring solutions meet business needs and requirements.
- Performing user acceptance testing.
- Managing projects, developing project plans, and monitoring performance.
- Updating, implementing, and maintaining procedures.
- Prioritizing initiatives based on business needs and requirements.
- Serving as a liaison between stakeholders and users.
- Managing competing resources and priorities.
- Monitoring deliverables and ensuring timely completion of projects.
Requirements:
- A bachelor’s degree in business or related field or an MBA.
- A minimum of 5 years of experience in business analysis or a related field.
- Should be from a development background.
- Should have business solution expertise.
- Exceptional analytical and conceptual thinking skills.
- Insurance domain experience is a must.
- The ability to influence stakeholders and work closely with them to determine acceptable solutions.
- Advanced technical skills.
- Excellent documentation skills.
- Fundamental analytical and conceptual thinking skills.
- Experience creating detailed reports and giving presentations.
Responsibilities:
• Designing Hive/HCatalog data models, including creating table definitions, file formats, and compression techniques for structured & semi-structured data processing
• Implementing Spark processing based ETL frameworks
• Implementing Big data pipeline for Data Ingestion, Storage, Processing & Consumption
• Modifying the Informatica-Teradata & Unix based data pipeline
• Enhancing the Talend-Hive/Spark & Unix based data pipelines
• Develop and deploy Scala/Python based Spark jobs for ETL processing (see the sketch after this list)
• Strong SQL & DWH concepts.
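As an illustration of the Spark-based ETL and Hive/HCatalog modelling items above, here is a minimal PySpark sketch that lands semi-structured JSON as a partitioned Hive table; the source path, database, table, and partition column are assumptions made for the example.

```python
# Illustrative only: paths, database, and partition column are assumptions.
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder
    .appName("ingest-to-hive")
    .enableHiveSupport()
    .getOrCreate()
)

# Ingest semi-structured JSON and land it as a partitioned Hive/HCatalog table
events = spark.read.json("hdfs:///data/raw/events/")
(
    events.withColumn("dt", F.to_date("event_ts"))
          .write.mode("overwrite")
          .partitionBy("dt")
          .format("parquet")
          .saveAsTable("analytics.events_curated")
)
spark.stop()
```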
Preferred Background:
• Function as integrator between business needs and technology solutions, helping to create technology solutions to meet clients’ business needs
• Lead project efforts in defining scope, planning, executing, and reporting to stakeholders on strategic initiatives
• Understanding of EDW system of business and creating High level design document and low level implementation document
• Understanding of Big Data Lake system of business and creating High level design document and low level implementation document
• Designing Big data pipeline for Data Ingestion, Storage, Processing & Consumption