50+ SQL Jobs in Bangalore (Bengaluru) | SQL Job openings in Bangalore (Bengaluru)
Apply to 50+ SQL Jobs in Bangalore (Bengaluru) on CutShort.io. Explore the latest SQL Job opportunities across top companies like Google, Amazon & Adobe.




About the Role
We are seeking a motivated and knowledgeable Data Science Teaching Assistant Intern to support our academic team in delivering high-quality learning experiences. This role is ideal for someone who enjoys teaching, solving problems, and wants to gain hands-on experience in the EdTech and Data Science domain.
As a Teaching Assistant, you'll help learners understand complex data science topics, resolve doubts, assist during live classes, and contribute to high-quality content development.
Key Responsibilities
Assist instructors during live classes by providing support and addressing learners' queries.
Conduct doubt-solving sessions to help learners grasp difficult concepts in Data Science, Python, Machine Learning, and related topics.
Contribute to content creation and review, including assignments, quizzes, and learning materials.
Provide one-on-one academic support and mentoring to learners when needed.
Ensure a positive and engaging learning environment during sessions.
Requirements
Bachelor's degree in Data Science, CSE, Statistics, or a related field.
Strong foundation in Python, Statistics, Machine Learning, and Data Analysis.
Excellent communication and interpersonal skills.
Ability to break down technical concepts into simple explanations.
Prior experience in teaching, mentoring, or assisting is a plus.
Passionate about education and helping others learn.
Perks
Hands-on teaching and mentoring experience.
Exposure to real-time learner interaction and feedback.
Mentorship from senior instructors and data science professionals.
Opportunity to receive a Pre-Placement Offer (PPO) based on performance.

Primary skill set: QA Automation, Python, BDD, SQL
As a Senior Data Quality Engineer you will:
- Evaluate product functionality and create test strategies and test cases to assess product quality.
- Work closely with the onshore and offshore teams.
- Validate multiple reports against databases by running medium-to-complex SQL queries.
- Develop a strong understanding of automation objects and integrations across various platforms and applications.
- Work as an individual contributor, exploring opportunities to improve performance and articulating the importance and advantages of proposed improvements to management.
- Integrate with SCM infrastructure to establish a continuous build and test cycle using CI/CD tools.
- Work comfortably in Linux/Windows environments and hybrid infrastructure models hosted on cloud platforms.
- Establish processes and a tool set to maintain automation scripts and generate regular test reports.
- Conduct peer reviews to provide feedback and ensure the test scripts are flawless.
Core/Must have skills:
- Excellent understanding of, and hands-on experience in, ETL/DWH testing, preferably on Databricks, paired with Python experience.
- Hands-on experience with SQL (analytical functions and complex queries), along with knowledge of using SQL client utilities effectively; a short sketch follows this list.
- Clear and crisp communication and commitment to deliverables.
- Experience in Big Data testing is an added advantage.
- Knowledge of Spark, Scala, Hive/Impala, and Python is an added advantage.
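To make the "analytical functions" requirement concrete, here is a minimal, hypothetical sketch in Python: it runs a window-function query of the kind used when validating report aggregates against a database. The table, data, and SQLite backend are illustrative assumptions, not part of the role's actual stack.

```python
# A minimal sketch of analytical-function SQL, run against an in-memory
# SQLite database (table and data are hypothetical placeholders).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (region TEXT, month TEXT, amount REAL);
    INSERT INTO sales VALUES
        ('North', '2024-01', 100), ('North', '2024-02', 150),
        ('South', '2024-01', 200), ('South', '2024-02', 120);
""")

# SUM() OVER and RANK() are typical analytical functions used when
# validating report aggregates against the source database.
query = """
    SELECT region, month, amount,
           SUM(amount) OVER (PARTITION BY region) AS region_total,
           RANK() OVER (ORDER BY amount DESC)     AS overall_rank
    FROM sales
    ORDER BY overall_rank;
"""
for row in conn.execute(query):
    print(row)
```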
Good to have skills:
- Test automation using BDD/Cucumber or TestNG, combined with strong hands-on experience in Java with Selenium; working experience in WebDriver.IO is especially valued.
- Ability to effectively articulate technical challenges and solutions
- Work experience in qTest, Jira, WebDriver.IO
Job Title : Senior Software Engineer – Backend
Experience Required : 6 to 12 Years
Location : Bengaluru (Hybrid – 3 Days Work From Office)
Number of Openings : 2
Work Hours : 11:00 AM – 8:00 PM IST
Notice Period : 30 Days Preferred
Work Location : SmartWorks The Cube, Karle Town SEZ, Building No. 5, Nagavara, Bangalore – 560045
Note : Face-to-face interview in Bangalore is mandatory during the second round.
Role Overview :
We are looking for an experienced Senior Backend Developer to join our growing team. This is a hands-on role focused on building cloud-based, scalable applications in the mortgage finance domain.
Key Responsibilities :
- Design, develop, and maintain backend components for cloud-based web applications.
- Contribute to architectural decisions involving microservices and distributed systems.
- Work extensively with Node.js and RESTful APIs.
- Implement scalable solutions using AWS services (e.g., Lambda, SQS, SNS, RDS); a minimal messaging sketch follows this list.
- Utilize both relational and NoSQL databases effectively.
- Collaborate with cross-functional teams to deliver robust and maintainable code.
- Participate in agile development practices and deliver rapid iterations based on feedback.
- Take ownership of system performance, scalability, and reliability.
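As an illustration of the AWS messaging services named above, here is a minimal, hypothetical SQS consumer sketch in Python with boto3. The queue URL is a placeholder, and since the role's primary stack is Node.js, treat this purely as a pattern sketch rather than the team's actual code.

```python
# Minimal SQS long-polling consumer sketch using boto3.
# QUEUE_URL is a hypothetical placeholder; credentials come from the environment.
import boto3

QUEUE_URL = "https://sqs.ap-south-1.amazonaws.com/123456789012/example-queue"
sqs = boto3.client("sqs", region_name="ap-south-1")

while True:
    # Long polling (WaitTimeSeconds) reduces empty responses and cost.
    resp = sqs.receive_message(
        QueueUrl=QUEUE_URL,
        MaxNumberOfMessages=10,
        WaitTimeSeconds=20,
    )
    for msg in resp.get("Messages", []):
        print("processing:", msg["Body"])
        # Delete only after successful processing so failures are retried.
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```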
Core Requirements :
- 5+ Years of total experience in backend development.
- Minimum 3 Years of experience in building scalable microservices or delivering large-scale products.
- Strong expertise in Node.js and REST APIs.
- Solid experience with RDBMS, SQL, and data modeling.
- Good understanding of distributed systems, scalability, and availability.
- Familiarity with AWS infrastructure and services.
- Development experience in Python and/or Java is a plus.
Preferred Skills :
- Experience with frontend frameworks like React.js or AngularJS.
- Working knowledge of Docker and containerized applications.
Interview Process :
- Round 1 : Online technical assessment (1 hour)
- Round 2 : Virtual technical interview
- Round 3 : In-person interview at the Bangalore office (2 hours – mandatory)


Job title - Python Developer
Experience - 4 to 6 years
Location - Pune / Mumbai / Bangalore
Job description below:
Requirements:
- Proven experience as a Python Developer
- Strong knowledge of core Python and PySpark concepts
- Experience with web frameworks such as Django or Flask
- Good exposure to any cloud platform (GCP Preferred)
- CI/CD exposure required
- Solid understanding of RESTful APIs and how to build them
- Experience working with databases like Oracle DB and MySQL
- Ability to write efficient SQL queries and optimize database performance
- Strong problem-solving skills and attention to detail
- Strong SQL programming (stored procedures, functions)
- Excellent communication and interpersonal skills
Roles and Responsibilities
- Design, develop, and maintain data pipelines and ETL processes using PySpark (see the sketch after this list)
- Work closely with data scientists and analysts to provide them with clean, structured data.
- Optimize data storage and retrieval for performance and scalability.
- Collaborate with cross-functional teams to gather data requirements.
- Ensure data quality and integrity through data validation and cleansing processes.
- Monitor and troubleshoot data-related issues to ensure data pipeline reliability.
- Stay up to date with industry best practices and emerging technologies in data engineering.
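As a rough illustration of the PySpark pipeline work described above, here is a minimal, hypothetical ETL sketch; the paths, columns, and S3 bucket are placeholder assumptions.

```python
# Minimal PySpark ETL sketch: read, transform, validate, write.
# Paths and column names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

raw = spark.read.option("header", True).csv("s3://example-bucket/raw/orders/")

cleaned = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("amount").isNotNull())                  # basic validation
       .withColumn("amount", F.col("amount").cast("double"))
       .withColumn("order_date", F.to_date("order_date"))
)

# Aggregate for downstream analysts.
daily = cleaned.groupBy("order_date").agg(F.sum("amount").alias("daily_revenue"))

daily.write.mode("overwrite").parquet("s3://example-bucket/curated/daily_revenue/")
```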

Role: Data Engineer (14+ years of experience)
Location: Whitefield, Bangalore
Mode of Work: Hybrid (3 days from office)
Notice period: Immediate, or serving notice with 30 days or less remaining
Note: Candidates should be based in Bangalore, as one interview round will be conducted face-to-face
Job Summary:
Role and Responsibilities
● Design and implement scalable data pipelines for ingesting, transforming, and loading data from various tools and sources.
● Design data models to support data analysis and reporting.
● Automate data engineering tasks using scripting languages and tools.
● Collaborate with engineers, process managers, data scientists to understand their needs and design solutions.
● Act as a bridge between the engineering and the business team in all areas related to Data.
● Automate monitoring and alerting mechanisms for data pipelines, products, and dashboards, and troubleshoot any issues; the role includes on-call requirements.
● Create and optimize SQL, including modularization and optimization that may require views and table creation in the source systems.
● Define best practices for data validation, automating as much as possible and aligning with enterprise standards.
● Manage data in QA environments (e.g., test data management).
Qualifications
● 14+ years of experience as a Data engineer or related role.
● Experience with Agile engineering practices.
● Strong experience in writing queries for RDBMS, cloud-based data warehousing solutions like Snowflake and Redshift.
● Experience with SQL and NoSQL databases.
● Ability to work independently or as part of a team.
● Experience with cloud platforms, preferably AWS.
● Strong experience with data warehousing and data lake technologies (Snowflake)
● Expertise in data modelling
● Experience with ETL/ELT tools and methodologies.
● 5+ years of experience in application development including Python, SQL, Scala, or Java
● Experience working on real-time Data Streaming and Data Streaming platform.
NOTE: IT IS MANDATORY TO ATTEND ONE TECHNICAL ROUND FACE-TO-FACE.
Role overview
1) Overall 5 to 7 years of experience; Node.js experience is a must.
2) At least 3 years of experience with microservices, or a couple of large-scale products delivered on microservices.
3) Strong design skills in microservices and AWS platform infrastructure.
4) Excellent programming skills in Python, Node.js, and Java.
5) Hands-on development of REST APIs.
6) Good understanding of the nuances of distributed systems, scalability, and availability.
Job Title : Flutter Dart Developer (Backend Heavy - Node.js)
Experience Required : 5+ Years
Location : Bellandur & Manthali, Bangalore – Onsite Only
Type : Contractual
About the Role :
We are looking for an experienced Flutter Dart Developer with a backend-heavy architecture (Node.js) to join our team on a contractual basis.
This role goes beyond basic UI development — we need someone who understands the complexities of security, caching, APIs, SQL, server-side rendering, performance tuning, and scalable backend architecture.
Mandatory Skills : Flutter, Dart, Backend-heavy architecture (Node.js), RESTful APIs, SQL, Caching, Firebase, State Management (Bloc/Provider/Riverpod), Performance Tuning, Git, Mobile Deployment (iOS & Android), Agile
Key Responsibilities :
- Develop high-performance cross-platform apps using Flutter & Dart.
- Translate complex UI/UX into responsive mobile experiences.
- Collaborate with product, design, and backend (Node.js) teams to deliver scalable features.
- Implement caching, security, SQL, and performance optimization strategies.
- Integrate RESTful APIs, Firebase, and third-party libraries.
- Conduct code reviews and support junior developers.
- Stay updated on emerging mobile/backend technologies.
Required Skills & Qualifications :
- 5+ Years in Mobile Development, 3+ Years with Flutter & Dart.
- Strong knowledge of state management (Bloc, Provider, Riverpod).
- Hands-on experience in Node.js for backend development.
- Expertise in API design, SQL, caching, offline storage, security, and performance tuning.
- Experience with Firebase, push notifications, and app deployment on iOS & Android.
- Familiarity with native mobile development (Kotlin/Java or Swift/Obj-C) is a plus.
- Proficient in Git and Agile methodologies with excellent problem-solving skills.

Role descriptions / Expectations from the Role
· 6-7 years of IT development experience, with a minimum of 3 years of hands-on experience in Snowflake
· Strong experience in designing and building data warehouses, data lakes, and data marts end to end, focusing on large enterprise-scale Snowflake implementations on any of the hyperscalers
· Strong experience in building productionized data ingestion and data pipelines in Snowflake
· Good knowledge of Snowflake's architecture, features like Zero-Copy Cloning and Time Travel, and performance tuning capabilities (see the sketch after this list)
· Good experience with Snowflake RBAC and data security
· Strong experience with Snowflake features, including newly released ones
· Good experience in Python/PySpark
· Experience with AWS services (S3, Glue, Lambda, Secrets Manager, DMS) and some Azure services (Blob Storage, ADLS, ADF)
· Experience with, or knowledge of, orchestration and scheduling tools such as Airflow
· Good understanding of ETL and ELT processes and ETL tools
· Strong written/verbal communication skills
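For concreteness, here is a minimal, hypothetical sketch of the Snowflake features named above (Zero-Copy Cloning and Time Travel), issued through the snowflake-connector-python driver. All connection parameters and object names are placeholders.

```python
# Minimal sketch of Snowflake Zero-Copy Cloning and Time Travel via
# snowflake-connector-python; credentials and objects are hypothetical.
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account",
    user="example_user",
    password="example_password",
    warehouse="DEV_WH",
    database="ANALYTICS",
    schema="PUBLIC",
)
cur = conn.cursor()

# Zero-Copy Clone: an instant, storage-free copy of a table for dev/testing.
cur.execute("CREATE OR REPLACE TABLE orders_dev CLONE orders")

# Time Travel: query the table as it looked one hour ago.
cur.execute("SELECT COUNT(*) FROM orders AT(OFFSET => -3600)")
print(cur.fetchone())

cur.close()
conn.close()
```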
· Minimum 5 years of core Java programming with the Collections Framework, concurrent programming, and multi-threading (good knowledge of ExecutorService, ForkJoinPool, and other threading concepts)
· Good knowledge of the JVM with an understanding of performance and memory optimization.
· Extensive and expert programming experience in the Java programming language (strong OO skills preferred).
· Excellent knowledge of collections such as ArrayList, Vector, LinkedList, HashMap, Hashtable, and HashSet is mandatory.
· Exercised exemplary development practices including design specification, coding standards, unit testing, and code-reviews.
· Expert level understanding of Object-Oriented Concepts and Data Structures
· Good experience in Database (Sybase, Oracle or SQL Server) like indexing (clustered, non-clustered), hashing, segmenting, data types like clob / blob, views (materialized), replication, constraints, functions, triggers, procedures etc.
- Experience in Core Java 5.0 and above, CXF, Spring.
- Extensive experience in developing enterprise-scale n-tier applications for the financial domain. Should possess good architectural knowledge and be aware of enterprise application design patterns.
- Should have the ability to analyze, design, develop and test complex, low-latency client-facing applications.
- Good experience with microservices, data structures, OOP, algorithms, multithreading, etc.
- Good development experience with RDBMS, preferably Sybase database.
- Good knowledge of multi-threading and high-volume server-side development.
- Experience in sales and trading platforms in investment banking/capital markets.
- Basic working knowledge of Unix/Linux.
- Experience in high-level and low-level design.
- Excellent problem solving and coding skills in Java.
- Strong interpersonal, communication and analytical skills.
- Should have the ability to express their design ideas and thoughts.

Position : Senior Data Analyst
Experience Required : 5 to 8 Years
Location : Hyderabad or Bangalore (Work Mode: Hybrid – 3 Days WFO)
Shift Timing : 11:00 AM – 8:00 PM IST
Notice Period : Immediate Joiners Only
Job Summary :
We are seeking a highly analytical and experienced Senior Data Analyst to lead complex data-driven initiatives that influence key business decisions.
The ideal candidate will have a strong foundation in data analytics, cloud platforms, and BI tools, along with the ability to communicate findings effectively across cross-functional teams. This role also involves mentoring junior analysts and collaborating closely with business and tech teams.
Key Responsibilities :
- Lead the design, execution, and delivery of advanced data analysis projects.
- Collaborate with stakeholders to identify KPIs, define requirements, and develop actionable insights.
- Create and maintain interactive dashboards, reports, and visualizations.
- Perform root cause analysis and uncover meaningful patterns from large datasets.
- Present analytical findings to senior leaders and non-technical audiences.
- Maintain data integrity, quality, and governance in all reporting and analytics solutions.
- Mentor junior analysts and support their professional development.
- Coordinate with data engineering and IT teams to optimize data pipelines and infrastructure.
Must-Have Skills :
- Strong proficiency in SQL and Databricks
- Hands-on experience with cloud data platforms (AWS, Azure, or GCP)
- Sound understanding of data warehousing concepts and BI best practices
Good-to-Have :
- Experience with AWS
- Exposure to machine learning and predictive analytics
- Industry-specific analytics experience (preferred but not mandatory)
We’re hiring a Maximo Technical Lead with hands-on experience in Maximo 7.6 or higher, Java, and Oracle DB. The role involves leading Maximo implementations, upgrades, and support projects, especially for manufacturing clients.
Key Skills:
IBM Maximo (MAS 8.x preferred)
Java, Oracle 12c+, WebSphere
Maximo Mobile / Asset Management / Cognos / BIRT
SQL, scripting, troubleshooting
Experience leading tech teams and working with clients
Good to Have:
IBM Maximo Certification
MES/Infrastructure planning knowledge
Experience with Rail or Manufacturing domain
Job Title: Backend Developer
Location: In-Office, Bangalore, Karnataka, India
Job Summary:
We are seeking a highly skilled and experienced Backend Developer with a minimum of 1 year of experience in product building to join our dynamic and innovative team. In this role, you will be responsible for designing, developing, and maintaining robust backend systems that drive our applications. You will collaborate with cross-functional teams to ensure seamless integration between frontend and backend components, and your expertise will be critical in architecting scalable, secure, and high-performance backend solutions.
Annual Compensation: 6-10 LPA
Responsibilities:
- Design, develop, and maintain scalable and efficient backend systems and APIs using NodeJS.
- Architect and implement complex backend solutions, ensuring high availability and performance.
- Collaborate with product managers, frontend developers, and other stakeholders to deliver comprehensive end-to-end solutions.
- Design and optimize data storage solutions using relational databases (e.g., MySQL) and NoSQL databases (e.g., MongoDB, Redis).
- Promote a culture of collaboration, knowledge sharing, and continuous improvement.
- Implement and enforce best practices for code quality, security, and performance optimization.
- Develop and maintain CI/CD pipelines to automate build, test, and deployment processes.
- Ensure comprehensive test coverage, including unit testing, and implement various testing methodologies and tools to validate application functionality.
- Utilize cloud services (e.g., AWS, Azure, GCP) for infrastructure deployment, management, and optimization.
- Conduct system design reviews and contribute to architectural discussions.
- Stay updated with industry trends and emerging technologies to drive innovation within the team.
- Implement secure authentication and authorization mechanisms and ensure data encryption for sensitive information.
- Design and develop event-driven applications utilizing serverless computing principles to enhance scalability and efficiency.
Requirements:
- Minimum of 1 year of proven experience as a Backend Developer, with a strong portfolio of product-building projects.
- Extensive experience with JavaScript backend frameworks (e.g., Express, Socket) and a deep understanding of their ecosystems.
- Strong expertise in SQL and NoSQL databases (MySQL and MongoDB) with a focus on data modeling and scalability.
- Practical experience with Redis and caching mechanisms to enhance application performance.
- Proficient in RESTful API design and development, with a strong understanding of API security best practices.
- In-depth knowledge of asynchronous programming and event-driven architecture.
- Familiarity with the entire web stack, including protocols, web server optimization techniques, and performance tuning.
- Experience with containerization and orchestration technologies (e.g., Docker, Kubernetes) is highly desirable.
- Proven experience working with cloud technologies (AWS/GCP/Azure) and understanding of cloud architecture principles.
- Strong understanding of fundamental design principles behind scalable applications and microservices architecture.
- Excellent problem-solving, analytical, and communication skills.
- Ability to work collaboratively in a fast-paced, agile environment and lead projects to successful completion.

About Role
We are seeking a skilled Backend Engineer with 2+ years of experience to join our dynamic team, focusing on building scalable web applications using Python frameworks (Django/FastAPI) and cloud technologies. You'll be instrumental in developing and maintaining our cloud-native backend services.
Responsibilities:
- Design and develop scalable backend services using Django and FastAPI (a minimal FastAPI sketch follows this list)
- Create and maintain RESTful APIs
- Implement efficient database schemas and optimize queries
- Implement containerisation using Docker and container orchestration
- Design and implement cloud-native solutions using microservices architecture
- Participate in technical design discussions, code reviews and maintain coding standards
- Document technical specifications and APIs
- Collaborate with cross-functional teams to gather requirements, prioritise tasks, and contribute to project completion.
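As a rough illustration of the FastAPI side of this role, here is a minimal, hypothetical service sketch; the Item resource and in-memory store are placeholder assumptions standing in for a real database layer.

```python
# Minimal FastAPI service sketch: one resource with a typed request model.
# Run with: uvicorn main:app --reload  (module name is hypothetical).
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

class Item(BaseModel):
    name: str
    price: float

# In-memory store standing in for a real database layer.
items: dict[int, Item] = {}

@app.post("/items/{item_id}")
def create_item(item_id: int, item: Item) -> Item:
    items[item_id] = item
    return item

@app.get("/items/{item_id}")
def read_item(item_id: int) -> Item:
    if item_id not in items:
        raise HTTPException(status_code=404, detail="Item not found")
    return items[item_id]
```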
Requirements:
- Experience with Django and/or FastAPI (2+ years)
- Proficiency in SQL and ORM frameworks
- Docker containerisation and orchestration
- Proficiency in shell scripting (Bash/PowerShell)
- Understanding of microservices architecture
- Experience building serverless backends
- Knowledge of deployment and debugging on cloud platforms (AWS/Azure)


Role Overview
We’re looking for a Data Analyst who is excited to work at the intersection of data, technology, and women’s wellness. You'll be instrumental in helping us understand user behaviour, community engagement, campaign performance, and product usage across platforms — including app, web, and WhatsApp.
You’ll also have opportunities to collaborate on AI-powered features such as chatbots and personalized recommendations. Experience with GenAI or NLP is a plus but not a requirement.
Key Responsibilities
· Clean, transform, and analyse data from multiple sources (SQL databases, CSVs, APIs).
· Build dashboards and reports to track KPIs, user behaviour, and marketing performance.
· Collaborate with product, marketing, and customer teams to uncover actionable insights.
· Support experiments, A/B testing, and cohort analysis to drive growth and retention.
· Assist in documentation and communication of findings to technical and non-technical teams.
· Work with the data team to enhance personalization and AI features (optional).
Required Qualifications
· Bachelor’s degree in Data Science, Statistics, Computer Science, or a related field.
· 2 – 4 years of experience in data analysis or business intelligence.
· Strong hands-on experience with SQL and Python (pandas, NumPy, matplotlib); a small pandas sketch follows this list.
· Familiarity with data visualization tools (Streamlit, Tableau, Metabase, Power BI, etc.)
· Ability to translate complex data into simple visual stories and clear recommendations.
· Strong attention to detail and a mindset for experimentation.
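To illustrate the cohort-analysis work mentioned above, here is a minimal, hypothetical pandas sketch; the events data and column names are invented for illustration.

```python
# Minimal pandas cohort-retention sketch.
# The events DataFrame and its columns are hypothetical placeholders.
import pandas as pd

events = pd.DataFrame({
    "user_id":  [1, 1, 2, 2, 3],
    "event_at": pd.to_datetime(
        ["2024-01-05", "2024-02-03", "2024-01-20", "2024-01-25", "2024-02-10"]),
})

# Assign each user to the month of their first activity (their cohort).
events["month"] = events["event_at"].dt.to_period("M")
events["cohort"] = events.groupby("user_id")["month"].transform("min")

# Count distinct active users per cohort per month.
retention = (
    events.groupby(["cohort", "month"])["user_id"]
          .nunique()
          .unstack(fill_value=0)
)
print(retention)
```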
Preferred (Not Mandatory)
· Exposure to GenAI, LLMs (e.g., OpenAI, HuggingFace), or NLP concepts.
· Experience working with healthcare, wellness, or e-commerce datasets.
· Familiarity with REST APIs, JSON structures, or chatbot systems.
· Interest in building tools that impact women’s health and wellness.
Why Join Us?
· Be part of a high-growth startup tackling a real need in women’s healthcare.
· Work with a passionate, purpose-driven team.
· Opportunity to grow into GenAI/ML-focused roles as we scale.
· Competitive salary and career progression
Best Regards,
Indrani Dutta
MIROR THERAPEUTICS PRIVATE LIMITED
Role
Snowflake Architect
Required Technical Skill Set
· 4-8 years of total experience and at least 3+ years of expertise in Cloud data warehouse technologies on Snowflake and AWS, Azure or GCP.
· At least one End-to-end Snowflake implementation is a must covering all aspects including architecture, design, data engineering, data visualization and data governance (specifically data quality and lineage).
· Significant experience with data migrations and development of Operational Data Stores, Enterprise Data Warehouses and Data Marts.
· Good hands-on knowledge on SQL and Data Warehousing life cycle is an absolute requirement.
· Experience with cloud ETL and ELT in one of the tools like dbt, Glue, ADF, or Matillion (or any other ELT tool), and exposure to the big data ecosystem (Hadoop).
· Expertise with at least one traditional data warehouse solution on Oracle, Teradata, or Microsoft SQL Server.
· Excellent communication skills to liaise with Business & IT stakeholders.
· Expertise in planning project execution and estimating effort.
· Understanding of Data Vault, data mesh, and data fabric architecture patterns.
· Exposure to Agile ways of working.

We are looking for an experienced and detail-oriented Senior Performance Testing Engineer to join our QA team. The ideal candidate will be responsible for designing, developing, and executing scalable and reliable performance testing strategies. You will lead performance engineering initiatives using tools like Locust, Python, Docker, Kubernetes, and cloud-native environments (AWS), ensuring our systems meet performance SLAs under real-world usage patterns.
Key Responsibilities
- Develop, enhance, and maintain Locust performance scripts using Python (a minimal sketch follows this list)
- Design realistic performance scenarios simulating real-world traffic and usage patterns
- Parameterize and modularize scripts for robustness and reusability
- Execute performance tests in containerized environments using Docker and Kubernetes
- Manage performance test execution on Kubernetes clusters
- Integrate performance tests into CI/CD pipelines in collaboration with DevOps and Development teams
- Analyze performance test results, including throughput, latency, response time, and error rates
- Identify performance bottlenecks, conduct root cause analysis, and suggest optimizations
- Work with AWS (or other cloud platforms) to deploy, scale, and monitor tests in cloud-native environments
- Write and optimize complex SQL queries, stored procedures, and perform DB performance testing
- Work with SQL Server extensively; familiarity with Postgres is a plus
- Develop and maintain performance testing strategies and test plans
- Define and track KPIs, SLAs, workload models, and success criteria
- Guide the team on best practices and promote a performance engineering mindset
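For concreteness, here is a minimal, hypothetical locustfile sketch of the kind described above; the endpoints and host are placeholders.

```python
# Minimal Locust script sketch (locustfile.py); endpoints are hypothetical.
# Run with: locust -f locustfile.py --host https://staging.example.com
from locust import HttpUser, task, between

class ShopUser(HttpUser):
    # Simulate think time between requests for a realistic traffic shape.
    wait_time = between(1, 5)

    @task(3)  # weighted: browsing happens 3x as often as checkout
    def browse_catalog(self):
        self.client.get("/api/products")

    @task(1)
    def checkout(self):
        self.client.post("/api/checkout", json={"cart_id": "demo-cart"})
```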
Must-Have Qualifications
- Proven hands-on experience with Locust and Python for performance testing
- Working knowledge of microservices architecture
- Hands-on with Kubernetes and Docker, especially in the context of running Locust at scale
- Experience integrating performance tests in CI/CD pipelines
- Strong experience with AWS or similar cloud platforms for deploying and scaling tests
- Solid understanding of SQL Server, including tuning stored procedures and query optimization
- Strong experience in performance test planning, execution, and analysis
Good-to-Have Skills
- Exposure to Postgres DB
- Familiarity with observability tools like Prometheus, Grafana, CloudWatch, and Datadog
- Basic knowledge of APM (Application Performance Monitoring) tools

Job Description:
Years of Experience: 5-8 years
Location: Bangalore
Job Role: Database Developer
Primary Skill: Database, SQL
Secondary Skill: DB2 and Python
Skills:
Main pointers for the Database Developer role:
- Strong working experience with any database, such as DB2 (good to have), SQL, or Oracle/PL SQL.
- Working experience with performance tuning.
Job Title : Cognos BI Developer
Experience : 6+ Years
Location : Bangalore / Hyderabad (Hybrid)
Notice Period : Immediate Joiners Preferred (Candidates serving notice with 10–15 days left can be considered)
Interview Mode : Virtual
Job Description :
We are seeking an experienced Cognos BI Developer with strong data modeling, dashboarding, and reporting expertise to join our growing team. The ideal candidate should have a solid background in business intelligence, data visualization, and performance analysis, and be comfortable working in a hybrid setup from Bangalore or Hyderabad.
Mandatory Skills :
Cognos BI, Framework Manager, Cognos Dashboarding, SQL, Data Modeling, Report Development (charts, lists, cross tabs, maps), ETL Concepts, KPIs, Drill-through, Macros, Prompts, Filters, Calculations.
Key Responsibilities :
- Understand business requirements in the BI context and design data models using Framework Manager to transform raw data into meaningful insights.
- Develop interactive dashboards and reports using Cognos Dashboard.
- Identify and define KPIs and create reports to monitor them effectively.
- Analyze data and present actionable insights to support business decision-making.
- Translate business requirements into technical specifications and determine timelines for execution.
- Design and develop models in Framework Manager, publish packages, manage security, and create reports based on these packages.
- Develop various types of reports, including charts, lists, cross tabs, and maps, and design dashboards combining multiple reports.
- Implement reports using macros, prompts, filters, and calculations.
- Perform data warehouse development activities and ensure seamless data flow.
- Write and optimize SQL queries to investigate data and resolve performance issues.
- Utilize Cognos features such as master-detail reports, drill-throughs, bookmarks, and page sets.
- Analyze and improve ETL processes to enhance data integration.
- Apply technical enhancements to existing BI systems to improve their performance and usability.
- Possess solid understanding of database fundamentals, including relational and multidimensional database design.
- Hands-on experience with Cognos Data Modules (data modeling) and dashboarding.

Job Title : Python Data Engineer
Experience : 4+ Years
Location : Bangalore / Hyderabad (On-site)
Job Summary :
We are seeking a skilled Python Data Engineer to work on cloud-native data platforms and backend services.
The role involves building scalable APIs, working with diverse data systems, and deploying containerized services using modern cloud infrastructure.
Mandatory Skills : Python, AWS, RESTful APIs, Microservices, SQL/PostgreSQL/NoSQL, Docker, Kubernetes, CI/CD (Jenkins/GitLab CI/AWS CodePipeline)
Key Responsibilities :
- Design, develop, and maintain backend systems using Python.
- Build and manage RESTful APIs and microservices architectures.
- Work extensively with AWS cloud services for deployment and data storage.
- Implement and manage SQL, PostgreSQL, and NoSQL databases.
- Containerize applications using Docker and orchestrate with Kubernetes.
- Set up and maintain CI/CD pipelines using Jenkins, GitLab CI, or AWS CodePipeline.
- Collaborate with teams to ensure scalable and reliable software delivery.
- Troubleshoot and optimize application performance.
Must-Have Skills :
- 4+ years of hands-on experience in Python backend development.
- Strong experience with AWS cloud infrastructure.
- Proficiency in building microservices and APIs.
- Good knowledge of relational and NoSQL databases.
- Experience with Docker and Kubernetes.
- Familiarity with CI/CD tools and DevOps processes.
- Strong problem-solving and collaboration skills.

Role overview
- Overall 5 to 7 years of experience; Node.js experience is a must.
- At least 3 years of experience with microservices, or a couple of large-scale products delivered on microservices.
- Strong design skills in microservices and AWS platform infrastructure.
- Excellent programming skills in Python, Node.js, and Java.
- Hands-on development of REST APIs.
- Good understanding of the nuances of distributed systems, scalability, and availability.
What would you do here
- Work as a Backend Developer developing cloud web applications.
- Be part of the team working on various types of web applications related to Mortgage Finance.
- Solve real-world problems by designing, implementing, and helping develop a new enterprise-class product from the ground up.
- Apply expertise in AWS cloud infrastructure and microservices architecture around the AWS service stack (Lambdas, SQS, SNS, MySQL databases), along with Docker and containerized solutions/applications.
- Work with relational and NoSQL databases and scalable designs.
- Solve challenging problems by developing elegant, maintainable code.
- Deliver rapid iterations of software based on user feedback and metrics.
- Help the team make key decisions on our product and technology direction.
- You actively contribute to the adoption of frameworks, standards, and new technologies.
Job Title : Oracle Analytics Cloud (OAC) / Fusion Data Intelligence (FDI) Specialist
Experience : 3 to 8 years
Location : All USI locations – Hyderabad, Bengaluru, Mumbai, Gurugram (preferred) and Pune, Chennai, Kolkata
Work Mode : Hybrid Only (2-3 days from office or all 5 days from office)
Mandatory Skills : Oracle Analytics Cloud (OAC), Fusion Data Intelligence (FDI), RPD, OAC Reports, Data Visualizations, SQL, PL/SQL, Oracle Databases, ODI, Oracle Cloud Infrastructure (OCI), DevOps tools, Agile methodology.
Key Responsibilities :
- Design, develop, and maintain solutions using Oracle Analytics Cloud (OAC).
- Build and optimize complex RPD models, OAC reports, and data visualizations.
- Utilize SQL and PL/SQL for data querying and performance optimization.
- Develop and manage applications hosted on Oracle Cloud Infrastructure (OCI).
- Support Oracle Cloud migrations, OBIEE upgrades, and integration projects.
- Collaborate with teams using the ODI (Oracle Data Integrator) tool for ETL processes.
- Implement cloud scripting using cURL for Oracle Cloud automation.
- Contribute to the design and implementation of Business Continuity and Disaster Recovery strategies for cloud applications.
- Participate in Agile development processes and DevOps practices including CI/CD and deployment orchestration.
Required Skills :
- Strong hands-on expertise in Oracle Analytics Cloud (OAC) and/or Fusion Data Intelligence (FDI).
- Deep understanding of data modeling, reporting, and visualization techniques.
- Proficiency in SQL, PL/SQL, and relational databases on Oracle.
- Familiarity with DevOps tools, version control, and deployment automation.
- Working knowledge of Oracle Cloud services, scripting, and monitoring.
Good to Have :
- Prior experience in OBIEE to OAC migrations.
- Exposure to data security models and cloud performance tuning.
- Certification in Oracle Cloud-related technologies.

Job Title : IBM Sterling Integrator Developer
Experience : 3 to 5 Years
Locations : Hyderabad, Bangalore, Mumbai, Gurgaon, Chennai, Pune
Employment Type : Full-Time
Job Description :
We are looking for a skilled IBM Sterling Integrator Developer with 3–5 years of experience to join our team across multiple locations.
The ideal candidate should have strong expertise in IBM Sterling and integration, along with scripting and database proficiency.
Key Responsibilities :
- Develop, configure, and maintain IBM Sterling Integrator solutions.
- Design and implement integration solutions using IBM Sterling.
- Collaborate with cross-functional teams to gather requirements and provide solutions.
- Work with custom languages and scripting to enhance and automate integration processes.
- Ensure optimal performance and security of integration systems.
Must-Have Skills :
- Hands-on experience with IBM Sterling Integrator and associated integration tools.
- Proficiency in at least one custom scripting language.
- Strong command over Shell scripting, Python, and SQL (mandatory).
- Good understanding of EDI standards and protocols is a plus.
Interview Process :
- 2 Rounds of Technical Interviews.
Additional Information :
- Open to candidates from Hyderabad, Bangalore, Mumbai, Gurgaon, Chennai, and Pune.

Job Summary:
As an AWS Data Engineer, you will be responsible for designing, developing, and maintaining scalable, high-performance data pipelines using AWS services. With 6+ years of experience, you’ll collaborate closely with data architects, analysts, and business stakeholders to build reliable, secure, and cost-efficient data infrastructure across the organization.
Key Responsibilities:
- Design, develop, and manage scalable data pipelines using AWS Glue, Lambda, and other serverless technologies
- Implement ETL workflows and transformation logic using PySpark and Python on AWS Glue (a minimal job skeleton follows this list)
- Leverage AWS Redshift for warehousing, performance tuning, and large-scale data queries
- Work with AWS DMS and RDS for database integration and migration
- Optimize data flows and system performance for speed and cost-effectiveness
- Deploy and manage infrastructure using AWS CloudFormation templates
- Collaborate with cross-functional teams to gather requirements and build robust data solutions
- Ensure data integrity, quality, and security across all systems and processes
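As a rough illustration of the Glue work described above, here is a minimal, hypothetical PySpark job skeleton; the catalog names and output path are placeholders, and the script only runs inside an AWS Glue job environment.

```python
# Minimal AWS Glue PySpark job skeleton.
# Catalog database/table and output path are hypothetical placeholders.
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read from the Glue Data Catalog, transform with Spark, write Parquet.
dyf = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="raw_orders")
df = dyf.toDF().dropDuplicates(["order_id"])

df.write.mode("overwrite").parquet("s3://example-bucket/curated/orders/")
job.commit()
```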
Required Skills & Experience:
- 6+ years of experience in Data Engineering with strong AWS expertise
- Proficient in Python and PySpark for data processing and ETL development
- Hands-on experience with AWS Glue, Lambda, DMS, RDS, and Redshift
- Strong SQL skills for building complex queries and performing data analysis
- Familiarity with AWS CloudFormation and infrastructure as code principles
- Good understanding of serverless architecture and cost-optimized design
- Ability to write clean, modular, and maintainable code
- Strong analytical thinking and problem-solving skills

- Database developer with strong Snowflake cloud database experience.
- Knowledge of Spark and Databricks is desirable.
- Strong technical background in data modelling, database design, and optimization for data warehouses, specifically on column-oriented MPP architectures.
- Familiar with technologies relevant to data lakes, such as Snowflake.
- Strong ETL and database design/modelling skills.
- Experience creating data pipelines.
- Strong SQL skills, debugging knowledge, and performance tuning experience.
- Experience with Databricks/Azure is an add-on (good to have).
- Experience working with global teams and global application environments.
- Strong understanding of SDLC methodologies, with a track record of high-quality deliverables and data quality, including detailed technical design documentation.
Java Developer – Job Description
Wissen Technology is now hiring a Java Developer in Bangalore with hands-on experience in Core Java, algorithms, data structures, multithreading, and SQL. We are solving complex technical problems in the industry and need talented software engineers to join our mission and be part of a global software development team. This is a brilliant opportunity to join a highly motivated and expert team that has made its mark in high-end technical consulting.
Required Skills:
- 5 to 12 years of experience.
- Experience in Core Java and Spring Boot.
- Extensive experience in developing enterprise-scale applications and systems. Should possess good architectural knowledge and be aware of enterprise application design patterns.
- Should have the ability to analyze, design, develop, and test complex, low-latency client-facing applications.
- Good development experience with RDBMS.
- Good knowledge of multi-threading and high-performance server-side development.
- Basic working knowledge of Unix/Linux.
- Excellent problem-solving and coding skills.
- Strong interpersonal, communication, and analytical skills.
- Should have the ability to express their design ideas and thoughts.
About Wissen Technology:
Wissen Technology is a niche global consulting and solutions company that brings unparalleled domain expertise in Banking and Finance, Telecom, and Startups. Wissen Technology is part of Wissen Group and was established in 2015. Wissen has offices in the US, India, UK, Australia, Mexico, and Canada, with best-in-class infrastructure and development facilities. Wissen has successfully delivered projects worth $1 billion for more than 25 of the Fortune 500 companies. The Wissen Group overall includes more than 4,000 highly skilled professionals.
Wissen Technology provides exceptional value in mission-critical projects for its clients through thought leadership, ownership, and assured on-time deliveries that are always 'first time right'. Our team consists of 1,200+ highly skilled professionals, with leadership and senior management executives who have graduated from Ivy League universities like Wharton, MIT, IITs, IIMs, and NITs and have rich work experience in some of the biggest companies in the world.
Wissen Technology offers an array of services including Application Development, Artificial Intelligence & Machine Learning, Big Data & Analytics, Visualization & Business Intelligence, Robotic Process Automation, Cloud, Mobility, Agile & DevOps, and Quality Assurance & Test Automation. We have been certified as a Great Place to Work® for two consecutive years (2020-2022) and voted a Top 20 AI/ML vendor by CIO Insider.
Job Title : Senior Salesforce Marketing Cloud Developer
Experience : 5–8 years
Location : Bangalore (On-site)
Notice Period : Immediate to 15 Days Only
Job Description :
We are looking for a skilled Senior Salesforce Marketing Cloud Developer with 5 to 8 years of hands-on experience.
The ideal candidate will be responsible for designing and implementing personalized marketing journeys, automation workflows, and multi-channel campaigns using Salesforce Marketing Cloud (SFMC).
This role requires strong expertise in APIs, scripting, and data handling to build scalable, personalized customer experiences.
Must-Have Skills :
- Salesforce Marketing Cloud (Automation Studio, Journey Builder, Email Studio, Mobile Studio)
- API Integration
- Strong SQL skills
- AMPscript & JavaScript
Good-to-Have Skills :
- HTML & CSS
- Interaction Studio / Marketing Cloud Personalization
Skill Name: ETL Automation Testing
Location: Bangalore, Chennai and Pune
Experience: 5+ Years
Required:
Experience in ETL Automation Testing
Strong experience in PySpark.

Required Skills:
- Hands-on experience with Databricks, PySpark
- Proficiency in SQL, Python, and Spark.
- Understanding of data warehousing concepts and data modeling.
- Experience with CI/CD pipelines and version control (e.g., Git).
- Fundamental knowledge of any cloud services, preferably Azure or GCP.
Good to Have:
- BigQuery
- Experience with performance tuning and data governance.
- Extract Transform Load (ETL) and ETL Tools skills
- Data Modeling and Data Integration expertise
- Data Warehousing knowledge
- Experience in working with SQL databases
- Strong analytical and problem-solving abilities
- Excellent communication and interpersonal skills
- Bachelor's degree in Computer Science, Information Systems, or related field
- Relevant certifications in ETL Testing or Data Warehousing
Manual Tester with Crew Domain Knowledge
Skill Set/Experience
- Experience: 5–12 years
- Locations: Chennai, Bangalore, Trivandrum, Cochin
- Notice Period: Immediate to <30 days
- Very good understanding of airline Crew and Ops business processes
- Experience in UAT and SIT testing including on-site customer interactions (added advantage).
- Strong business/testing skills: use case review; test case creation (interface, functional, UAT, and end-to-end), review, and execution
- Basic knowledge of SQL, UNIX
- Experience in API/Interfaces testing
- Experience with any of the defect tracking systems.
- Exposure to the Selenium test automation tool is an added advantage

We are looking for:
• 2+ years of expertise in software development with one or more of the general programming languages (e.g., Python, Java, C/C++, Go). Experience in Python and Django is recommended.
• Deep understanding of how to build an application with optimized RESTful APIs.
• Knowledge of a web framework like Django or similar with ORM or multi-tier, multi-DB-based data-heavy web application development will help your profile stand out.
• Knowledge of Gen AI tools and technologies is a plus.
• Sound knowledge of SQL queries & DB like PostgreSQL(must) or MySQL. Working knowledge of NoSQL DBs (Elasticsearch, Mongo, Redis, etc.) is a plus.
• Knowledge of graph DB like Neo4j or AWS Neptune adds extra credits to your profile.
• Knowledge of queue-based messaging frameworks like Celery, RQ, Kafka, etc., and an understanding of distributed systems will be advantageous (a minimal Celery sketch follows this list).
• Understands a programming language's limitations to exploit the language behavior to the fullest potential.
• Understanding of accessibility and security compliances
• Ability to communicate complex technical concepts to both technical and non-technical audiences with ease
• Diversity in skills like version control tools, CI/CD, cloud basics, good debugging skills, and test-driven development will help your profile stand out.
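To make the queue-based messaging point concrete, here is a minimal, hypothetical Celery sketch; the broker URL and task body are placeholders.

```python
# Minimal Celery sketch of queue-based background work.
# The broker URL and task body are hypothetical placeholders.
from celery import Celery

app = Celery("tasks", broker="redis://localhost:6379/0",
             backend="redis://localhost:6379/1")

@app.task(bind=True, max_retries=3)
def send_welcome_email(self, user_id: int) -> str:
    try:
        # ... call the mail service here (placeholder) ...
        return f"sent to user {user_id}"
    except Exception as exc:
        # Retry with exponential backoff on transient failures.
        raise self.retry(exc=exc, countdown=2 ** self.request.retries)

# Producer side: enqueue without blocking the web request.
# send_welcome_email.delay(42)
```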

Job Description
• Role: Quality Assurance Engineer – Automation (3–4 yrs)
• Location: Bengaluru
• Type: Full-time
Why this role? Join a fast-moving team that’s pushing test automation into the AI era. You’ll own end-to-end quality for web, mobile and API layers, combining Playwright (or similar) with next-gen, AI-driven test platforms to deliver smarter, faster releases.
What you’ll do
• Build & maintain automation with Playwright, Selenium, Cypress or equivalent (a minimal Playwright sketch appears at the end of this posting)
• Super-charge coverage using AI-powered tools
• Create, run and optimize manual, API (Postman/Rest Assured) and database (SQL) tests
• Triage results, file defects in Jira, and drive them to closure
What you bring
• 3–4 years’ hands-on automation experience
• Strong with Playwright (preferred) or Selenium/Cypress and one scripting language (JS/TS, Python or Java)
• Familiarity with AI-based testing platforms
• Solid API testing & SQL skills; sound grasp of STLC and defect management
• Clear communicator with sharp analytical instincts
• Nice to have: BDD (Cucumber/SpecFlow), performance testing (JMeter/LoadRunner), TestRail/Zephyr, ML model validation
Qualifications
• Bachelor's in Computer Science, Engineering or related field
What's in it for you?
• Hands-on exposure to cutting-edge AI test automation
• Ownership and room to innovate in a collaborative, high-impact environment
• Competitive pay, flexible policies and a fun team
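As a taste of the automation work described in this posting, here is a minimal, hypothetical Playwright (Python) sketch; the URL and selectors are placeholders.

```python
# Minimal Playwright (Python) sketch of a web UI check.
# The URL and selectors are hypothetical placeholders.
from playwright.sync_api import sync_playwright, expect

def test_login_flow():
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.goto("https://staging.example.com/login")

        page.fill("#username", "demo-user")
        page.fill("#password", "demo-pass")
        page.click("button[type=submit]")

        # Web-first assertion: waits until the condition holds or times out.
        expect(page.locator("h1")).to_have_text("Dashboard")
        browser.close()
```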

Job Title: UiPath Developer
Experience: 5 to 8 years
Location: Bangalore
Notice Period: Immediate to 15 days
Key Responsibilities:
1. Develop and implement automation solutions using UiPath.
2. Design, develop, test, and deploy RPA bots for process automation.
3. Write and optimize SQL queries, including joins, to manage and manipulate data effectively.
4. Develop scripts using Python, VB, .NET, or JavaScript to enhance automation capabilities.
5. Work with business stakeholders to analyze and optimize automation workflows.
6. Troubleshoot and resolve issues in RPA processes and scripts.
7. Ensure adherence to best practices in automation development and deployment.
Required Skills & Experience:
1. 5-8 years of experience in RPA development with UiPath.
2. Strong expertise in SQL, including writing complex queries and joins.
3. Hands-on experience with at least one scripting language: Python, VB, .NET, or JavaScript.
4. Understanding of RPA best practices, exception handling, and performance optimization.
5. Experience integrating UiPath with databases and other applications.
6. Strong problem-solving and analytical skills.
Our Client details
🔍 Who We Are:
Join Leading Healthcare, a U.S.-based product company transforming the $1.1T health insurance space. From our growing Bangalore tech hub, we power systems that support 81M+ lives and process millions of health claims daily.
🚨 Now Hiring: Senior Java Technical Support Engineer 🚨
📍 Location: Bangalore | Hybrid | Onsite role
💼 Experience: 4–8 Years | 🕒 Immediate to 30 Days
Are you the Java Support Rockstar we’re looking for? 🎸
Join our team in Bangalore to solve real-world problems in the healthcare domain!
Your Mission:
🛠 Development | 🧠 Troubleshooting | 🔍 Analysis | 🧪 Testing
🌐 Java | J2EE | REST APIs | WebLogic/WebSphere | Oracle RDBMS
🐧 Linux Scripting | 🧵 Multi-threading & debugging | ♻️ Garbage Collection | Solid SQL | Concurrency, GC, serialization | Strong scripting skills in Shell/Unix
You’ll be working on:
🩺 Critical Healthcare Systems
📞 Supporting enterprise customers
🔧 Debugging, testing, enhancing platforms
🌍 Collaborating with global teams
🌐 Work across REST APIs, WebLogic, RDBMS, Linux, Unix, Shell and more
📈 Identify & solve performance bottlenecks
🧠 You’ll Analyze. Troubleshoot. Test. Develop.
👥 Lead teams, support enterprise apps, and build rock-solid systems.
Why You Should Apply:
✅ Leadership opportunities
✅ Cross-functional exposure with US teams
✅ Huge tech learning curve
✅ Hybrid work with global impact

- A bachelor’s degree in Computer Science or a related field.
- 5-7 years of experience working as a hands-on developer in Sybase, DB2, and ETL technologies.
- Worked extensively on data integration, designing and developing reusable interfaces.
- Advanced experience in Python, DB2, Sybase, shell scripting, Unix, Perl scripting, DB platforms, and database design and modeling.
- Expert-level understanding of data warehouse and core database concepts and relational database design.
- Experience in writing stored procedures, optimization, and performance tuning.
- Strong technology acumen and a deep strategic mindset.
- Proven track record of delivering results
- Proven analytical skills and experience making decisions based on hard and soft data
- A desire and openness to learning and continuous improvement, both of yourself and your team members.
- Hands-on experience on development of APIs is a plus
- Good to have: experience with Business Intelligence tools, Source-to-Pay applications such as SAP Ariba, and Accounts Payable systems.
Skills Required:
- Familiarity with Postgres and Python is a plus
Job Summary:
Seeking a seasoned SQL + ETL Developer with 4+ years of experience in managing large-scale datasets and cloud-based data pipelines. The ideal candidate is hands-on with MySQL, PySpark, AWS Glue, and ETL workflows, with proven expertise in AWS migration and performance optimization.
Key Responsibilities:
- Develop and optimize complex SQL queries and stored procedures to handle large datasets (100+ million records).
- Build and maintain scalable ETL pipelines using AWS Glue and PySpark.
- Work on data migration tasks in AWS environments.
- Monitor and improve database performance; automate key performance indicators and reports.
- Collaborate with cross-functional teams to support data integration and delivery requirements.
- Write shell scripts for automation and manage ETL jobs efficiently.
Required Skills:
- Strong experience with MySQL, complex SQL queries, and stored procedures.
- Hands-on experience with AWS Glue, PySpark, and ETL processes.
- Good understanding of AWS ecosystem and migration strategies.
- Proficiency in shell scripting.
- Strong communication and collaboration skills.
Nice to Have:
- Working knowledge of Python.
- Experience with AWS RDS.

Profile: AWS Data Engineer
Mode- Hybrid
Experience - 5 to 7 years
Locations - Bengaluru, Pune, Chennai, Mumbai, Gurugram
Roles and Responsibilities
- Design and maintain ETL pipelines using AWS Glue and Python/PySpark
- Optimize SQL queries for Redshift and Athena
- Develop Lambda functions for serverless data processing (a minimal handler sketch follows this list)
- Configure AWS DMS for database migration and replication
- Implement infrastructure as code with CloudFormation
- Build optimized data models for performance
- Manage RDS databases and AWS service integrations
- Troubleshoot and improve data processing efficiency
- Gather requirements from business stakeholders
- Implement data quality checks and validation
- Document data pipelines and architecture
- Monitor workflows and implement alerting
- Keep current with AWS services and best practices
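For concreteness, here is a minimal, hypothetical Lambda handler sketch for serverless data processing triggered by S3 object-created events; the transformation step itself is a placeholder.

```python
# Minimal AWS Lambda handler sketch for S3-triggered data processing.
# The transformation step is a hypothetical placeholder.
import json
import urllib.parse
import boto3

s3 = boto3.client("s3")

def lambda_handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])

        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        rows = json.loads(body)

        # ... transform/validate rows here before loading downstream ...
        print(f"processed {len(rows)} rows from s3://{bucket}/{key}")

    return {"statusCode": 200}
```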
Required Technical Expertise:
- Python/PySpark for data processing
- AWS Glue for ETL operations
- Redshift and Athena for data querying
- AWS Lambda and serverless architecture
- AWS DMS and RDS management
- CloudFormation for infrastructure
- SQL optimization and performance tuning

About the Company:
Gruve is an innovative Software Services startup dedicated to empowering Enterprise Customers in managing their Data Life Cycle. We specialize in Cyber Security, Customer Experience, Infrastructure, and advanced technologies such as Machine Learning and Artificial Intelligence. Our mission is to assist our customers in their business strategies utilizing their data to make more intelligent decisions. As a well-funded early-stage startup, Gruve offers a dynamic environment with strong customer and partner networks.
Why Gruve:
At Gruve, we foster a culture of innovation, collaboration, and continuous learning. We are committed to building a diverse and inclusive workplace where everyone can thrive and contribute their best work. If you’re passionate about technology and eager to make an impact, we’d love to hear from you.
Gruve is an equal opportunity employer. We welcome applicants from all backgrounds and thank all who apply; however, only those selected for an interview will be contacted.
Position summary:
We are seeking a Senior Software Development Engineer – Data Engineering with 5-8 years of experience to design, develop, and optimize data pipelines and analytics workflows using Snowflake, Databricks, and Apache Spark. The ideal candidate will have a strong background in big data processing, cloud data platforms, and performance optimization to enable scalable data-driven solutions.
Key Roles & Responsibilities:
- Design, develop, and optimize ETL/ELT pipelines using Apache Spark, PySpark, Databricks, and Snowflake.
- Implement real-time and batch data processing workflows in cloud environments (AWS, Azure, GCP).
- Develop high-performance, scalable data pipelines for structured, semi-structured, and unstructured data.
- Work with Delta Lake and Lakehouse architectures to improve data reliability and efficiency (a minimal upsert sketch follows this list).
- Optimize Snowflake and Databricks performance, including query tuning, caching, partitioning, and cost optimization.
- Implement data governance, security, and compliance best practices.
- Build and maintain data models, transformations, and data marts for analytics and reporting.
- Collaborate with data scientists, analysts, and business teams to define data engineering requirements.
- Automate infrastructure and deployments using Terraform, Airflow, or dbt.
- Monitor and troubleshoot data pipeline failures, performance issues, and bottlenecks.
- Develop and enforce data quality and observability frameworks using Great Expectations, Monte Carlo, or similar tools.
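As a rough illustration of the Delta Lake work named in this list, here is a minimal, hypothetical upsert sketch using the delta-spark Python bindings; the paths, columns, and Spark configuration are placeholder assumptions.

```python
# Minimal Delta Lake upsert (MERGE) sketch via delta-spark.
# Paths and columns are hypothetical placeholders; Delta jars must be on
# the Spark classpath for these configs to take effect.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("orders-upsert")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

updates = spark.read.parquet("s3://example-bucket/staging/orders/")
target = DeltaTable.forPath(spark, "s3://example-bucket/lake/orders/")

# MERGE gives idempotent upserts, a key reliability benefit of Delta Lake.
(
    target.alias("t")
    .merge(updates.alias("u"), "t.order_id = u.order_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```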
Basic Qualifications:
- Bachelor’s or Master’s Degree in Computer Science or Data Science.
- 5–8 years of experience in data engineering, big data processing, and cloud-based data platforms.
- Hands-on expertise in Apache Spark, PySpark, and distributed computing frameworks.
- Strong experience with Snowflake (Warehouses, Streams, Tasks, Snowpipe, Query Optimization).
- Experience in Databricks (Delta Lake, MLflow, SQL Analytics, Photon Engine).
- Proficiency in SQL, Python, or Scala for data transformation and analytics.
- Experience working with data lake architectures and storage formats (Parquet, Avro, ORC, Iceberg).
- Hands-on experience with cloud data services (AWS Redshift, Azure Synapse, Google BigQuery).
- Experience in workflow orchestration tools like Apache Airflow, Prefect, or Dagster.
- Strong understanding of data governance, access control, and encryption strategies.
- Experience with CI/CD for data pipelines using GitOps, Terraform, dbt, or similar technologies.
Preferred Qualifications:
- Knowledge of streaming data processing (Apache Kafka, Flink, Kinesis, Pub/Sub).
- Experience in BI and analytics tools (Tableau, Power BI, Looker).
- Familiarity with data observability tools (Monte Carlo, Great Expectations).
- Experience with machine learning feature engineering pipelines in Databricks.
- Contributions to open-source data engineering projects.

Job Overview:
We are seeking an experienced AWS Data Engineer to join our growing data team. The ideal candidate will have hands-on experience with AWS Glue, Redshift, PySpark, and other AWS services to build robust, scalable data pipelines. This role is perfect for someone passionate about data engineering, automation, and cloud-native development.
Key Responsibilities:
- Design, build, and maintain scalable and efficient ETL pipelines using AWS Glue, PySpark, and related tools.
- Integrate data from diverse sources and ensure its quality, consistency, and reliability.
- Work with large datasets in structured and semi-structured formats across cloud-based data lakes and warehouses.
- Optimize and maintain data infrastructure, including Amazon Redshift, for high performance (a load sketch appears at the end of this listing).
- Collaborate with data analysts, data scientists, and product teams to understand data requirements and deliver solutions.
- Automate data validation, transformation, and loading processes to support real-time and batch data processing.
- Monitor and troubleshoot data pipeline issues and ensure smooth operations in production environments.
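The sketch below shows a minimal AWS Glue job skeleton of the kind this role builds. The catalog database (raw_db), table (events), and S3 bucket are hypothetical; the Glue APIs shown (GlueContext, DynamicFrame reads/writes) are the standard ones available inside a Glue job:
```python
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read from the Glue Data Catalog (placeholder database/table).
src = glue_context.create_dynamic_frame.from_catalog(
    database="raw_db", table_name="events"
)

# Keep only the fields the warehouse needs.
clean = src.select_fields(["event_id", "ts", "payload"])

# Land curated Parquet back in the data lake (placeholder bucket).
glue_context.write_dynamic_frame.from_options(
    frame=clean,
    connection_type="s3",
    connection_options={"path": "s3://my-datalake/curated/events/"},
    format="parquet",
)
job.commit()
```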
Required Skills:
- 5 to 7 years of hands-on experience in data engineering roles.
- Strong proficiency in Python and PySpark for data transformation and scripting.
- Deep understanding and practical experience with AWS Glue, AWS Redshift, S3, and other AWS data services.
- Solid understanding of SQL and database optimization techniques.
- Experience working with large-scale data pipelines and high-volume data environments.
- Good knowledge of data modeling, warehousing, and performance tuning.
Preferred/Good to Have:
- Experience with workflow orchestration tools like Airflow or Step Functions.
- Familiarity with CI/CD for data pipelines.
- Knowledge of data governance and security best practices on AWS.
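For the Redshift bullet above, a minimal load sketch, assuming data already staged in S3 as Parquet and a hypothetical IAM role with read access to the bucket:
```python
import psycopg2

conn = psycopg2.connect(
    host="my-cluster.example.us-east-1.redshift.amazonaws.com",  # placeholder
    port=5439, dbname="analytics", user="loader", password="...",
)
with conn, conn.cursor() as cur:
    # COPY is the idiomatic high-throughput load path into Redshift.
    cur.execute("""
        COPY analytics.events
        FROM 's3://my-datalake/curated/events/'
        IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-loader'
        FORMAT AS PARQUET
    """)
conn.close()
```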
Role - ETL Developer
Work Mode - Hybrid
Experience- 4+ years
Location - Pune, Gurgaon, Bengaluru, Mumbai
Required Skills - AWS, AWS Glue, PySpark, ETL, SQL
Required Skills:
- 4+ years of hands-on experience in MySQL, including SQL queries and procedure development
- Experience in PySpark, AWS, and AWS Glue
- Experience with AWS migrations
- Experience with automated scripting and tracking KPIs/metrics for database performance (see the sketch after this list)
- Proficiency in shell scripting and ETL.
- Strong communication skills and a collaborative team player
- Knowledge of Python and AWS RDS is a plus
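As an illustration of the KPI-tracking requirement, a small sketch that polls MySQL status counters with the mysql-connector-python package. The credentials and the choice of counters are placeholders:
```python
import time
import mysql.connector

# Placeholder credentials; a monitoring user with minimal grants suffices.
conn = mysql.connector.connect(host="localhost", user="monitor", password="...")
cur = conn.cursor()

KPIS = ("Threads_connected", "Slow_queries", "Questions")

for _ in range(3):  # sample a few times; a real job would run on a schedule
    for kpi in KPIS:
        cur.execute(f"SHOW GLOBAL STATUS LIKE '{kpi}'")
        for name, value in cur.fetchall():
            print(f"{name}={value}")
    time.sleep(10)
conn.close()
```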
Role : Java Developer
Location : Bangalore
Key responsibilities
- Experience – 3 to 8 years of experience.
- Experience in Core Java and Spring Boot.
- Extensive experience in developing enterprise-scale applications and systems. Should possess good architectural knowledge and be aware of enterprise application design patterns.
- Should have the ability to analyze, design, develop, and test complex, low-latency client-facing applications.
- Good development experience with RDBMS
- Good knowledge of multi-threading and high-performance server-side development.
- Basic working knowledge of Unix/Linux.
- Excellent problem solving and coding skills.
- Strong interpersonal, communication and analytical skills.
- Should have the ability to express their design ideas and thoughts.
About Wissen Technology:
The Wissen Group was founded in the year 2000. Wissen Technology, a part of Wissen Group, was established in the year 2015. Wissen Technology is a specialized technology company that delivers high-end consulting for organizations in the Banking & Finance, Telecom, and Healthcare domains. We help clients build world-class products.
Our workforce consists of highly skilled professionals, with leadership and senior management executives who have graduated from Ivy League universities like Wharton and MIT, as well as IITs, IIMs, and NITs, and who have rich work experience in some of the biggest companies in the world. Wissen Technology has grown its revenues by 400% in five years without any external funding or investments. The company is globally present, with offices in the US, India, UK, Australia, Mexico, and Canada.
We offer an array of services including Application Development, Artificial Intelligence & Machine Learning, Big Data & Analytics, Visualization & Business Intelligence, Robotic Process Automation, Cloud, Mobility, Agile & DevOps, Quality Assurance & Test Automation.
We have been certified as a Great Place to Work® company for two consecutive years (2020-2022) and voted as the Top 20 AI/ML vendor by CIO Insider.
Over the years, Wissen Group has successfully delivered $650 million worth of projects for more than 20 of the Fortune 500 companies. Wissen Technology provides exceptional value in mission critical projects for its clients, through thought leadership, ownership, and assured on-time deliveries that are always ‘first time right’.
The technology and thought leadership that the company commands in the industry is the direct result of the kind of people Wissen has been able to attract. Wissen is committed to providing them the best possible opportunities and careers, which extends to providing the best possible experience and value to our clients. We have served clients across sectors like Banking, Ecommerce, Telecom, Healthcare, Manufacturing, and Energy.
Career Progression:
At Wissen Technology, your career growth is important for us. Therefore, we put in several efforts towards each employee’s career progression – to enable and empower them to grow within the company as well as to instill a sense of responsibility, loyalty, and trust.
There have been instances where a software engineer has grown from being an individual contributor to technical lead and now on the path to becoming a director responsible for growing revenues and profitability. We deeply value Ownership: taking responsibility, making it happen, not letting the ball drop, and being accountable.
Job Title: Java Developer
Wissen Technology is now hiring a Java Developer in Bangalore with hands-on experience in Core Java, algorithms, data structures, multithreading, and SQL. We are solving complex technical problems in the industry and need talented software engineers to join our mission and be part of a global software development team. This is a brilliant opportunity to become part of a highly motivated and expert team which has made its mark in high-end technical consulting.
Required Skills:
• Exp. - 4 to 14 years.
• Experience in Core Java and Spring Boot.
• Extensive experience in developing enterprise-scale applications and systems. Should possess good architectural knowledge and be aware of enterprise application design patterns.
• Should have the ability to analyze, design, develop, and test complex, low-latency client-facing applications.
• Good development experience with RDBMS.
• Good knowledge of multi-threading and high-performance server-side development.
• Basic working knowledge of Unix/Linux.
• Excellent problem solving and coding skills.
• Strong interpersonal, communication and analytical skills.
• Should have the ability to express their design ideas and thoughts.
About Wissen Technology:
Wissen Technology is a niche global consulting and solutions company that brings unparalleled domain expertise in Banking and Finance, Telecom, and Startups. Wissen Technology is a part of Wissen Group and was established in the year 2015. Wissen has offices in the US, India, UK, Australia, Mexico, and Canada, with best-in-class infrastructure and development facilities. Wissen has successfully delivered projects worth $1 billion for more than 25 of the Fortune 500 companies. The Wissen Group overall includes more than 4,000 highly skilled professionals.
Wissen Technology provides exceptional value in mission-critical projects for its clients through thought leadership, ownership, and assured on-time deliveries that are always 'first time right'. Our team consists of 1200+ highly skilled professionals, with leadership and senior management executives who have graduated from Ivy League universities like Wharton and MIT, as well as IITs, IIMs, and NITs, and who have rich work experience in some of the biggest companies in the world.
Wissen Technology offers an array of services including Application Development, Artificial Intelligence & Machine Learning, Big Data & Analytics, Visualization & Business Intelligence, Robotic Process Automation, Cloud, Mobility, Agile & DevOps, and Quality Assurance & Test Automation. We have been certified as a Great Place to Work® for two consecutive years (2020-2022) and voted a Top 20 AI/ML vendor by CIO Insider.

Job Description:
Wissen Technology is looking for a skilled Automation Anywhere Engineer to join our dynamic team in Bangalore. The ideal candidate will have hands-on experience in Automation Anywhere, Document Automation, SQL, and Python, with a strong background in designing and implementing automation solutions.
Key Responsibilities:
- Design, develop, and deploy automation solutions using Automation Anywhere.
- Work on Document Automation to extract, process, and validate structured/unstructured data.
- Develop scripts and automation solutions using Python for enhanced process efficiency (see the sketch at the end of this listing).
- Optimize data processing workflows and database queries using SQL.
- Collaborate with cross-functional teams to identify automation opportunities and enhance business processes.
- Perform unit testing, debugging, and troubleshooting of automation scripts.
- Ensure adherence to industry best practices and compliance standards in automation processes.
- Provide support, maintenance, and enhancements to existing automation solutions.
Required Skills & Qualifications:
- 4 to 8 years of experience in RPA development using Automation Anywhere.
- Strong expertise in Automation Anywhere A360 (preferred).
- Hands-on experience with Document Automation tools and technologies.
- Proficiency in Python for scripting and automation.
- Strong knowledge of SQL for data processing and querying.
- Experience in troubleshooting, debugging, and optimizing automation workflows.
- Ability to work in an Agile environment and collaborate with cross-functional teams.
- Excellent problem-solving skills and attention to detail.
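To make the document-automation responsibilities concrete, here is an illustrative Python sketch that is deliberately independent of Automation Anywhere's own APIs: extract structured fields from unstructured text, then validate them against a database. sqlite3 stands in for the production database, and the invoice format is invented:
```python
import re
import sqlite3

INVOICE_TEXT = "Invoice No: INV-1042\nDate: 2024-03-15\nTotal: 1,299.00"

def extract_fields(text: str) -> dict:
    """Pull key fields out of free-form invoice text with regexes.
    Assumes the text matches the expected layout."""
    return {
        "invoice_no": re.search(r"Invoice No:\s*(\S+)", text).group(1),
        "total": float(
            re.search(r"Total:\s*([\d,.]+)", text).group(1).replace(",", "")
        ),
    }

fields = extract_fields(INVOICE_TEXT)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE invoices (invoice_no TEXT PRIMARY KEY, total REAL)")
conn.execute("INSERT INTO invoices VALUES ('INV-1042', 1299.00)")

# Validation step: does the extracted total match the system of record?
row = conn.execute(
    "SELECT total FROM invoices WHERE invoice_no = ?", (fields["invoice_no"],)
).fetchone()
assert row and abs(row[0] - fields["total"]) < 0.01, "total mismatch"
print("validated", fields)
```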

Responsibilities:
- Develop and maintain high-quality, efficient, and scalable backend applications.
- Participate in all phases of the software development lifecycle (SDLC)
- Write clean, well-documented, and testable code adhering to best practices.
- Collaborate with team members to ensure the successful delivery of projects.
- Debug and troubleshoot complex technical problems.
- Identify and implement performance optimizations.
- Participate in code reviews.
Requirements:
- Hands-on experience with Spring Boot and Java 8 and above.
- 5-8 years of experience developing Java applications.
- Knowledge of at least one messaging system, such as Kafka or RabbitMQ.
- React developer requirements, qualifications & skills:
- Proficiency in React.js and its core principles
- Strong JavaScript, HTML5, and CSS3 skills
- Experience with popular React.js workflows (such as Redux)
- Strong understanding of object-oriented programming (OOP) principles.
- Experience with design patterns and best practices for Java development.
- Proficient in unit testing frameworks (e.g., JUnit).
- Experience with build automation tools (e.g., Maven, Gradle).
- Experience with version control systems (e.g., Git).
- Experience with one of these databases: Postgres, MongoDB, or Cassandra.
- Knowledge of Retail or OMS is a plus.
- Experienced in containerized deployments using Docker and Kubernetes, with a DevOps mindset.
- Ability to reverse engineer existing/legacy systems and document findings on Confluence.
- Create automated tests for unit, integration, regression, performance, and functional testing, to meet established expectations and acceptance criteria.
- Document APIs using Lowe’s established tooling.
Job Title: Backend Developer
Location: In-Office, Bangalore, Karnataka, India
Job Summary:
We are seeking a highly skilled and experienced Backend Developer with a minimum of 1 year of experience in product building to join our dynamic and innovative team. In this role, you will be responsible for designing, developing, and maintaining robust backend systems that drive our applications. You will collaborate with cross-functional teams to ensure seamless integration between frontend and backend components, and your expertise will be critical in architecting scalable, secure, and high-performance backend solutions.
Annual Compensation: 6-10 LPA
Responsibilities:
- Design, develop, and maintain scalable and efficient backend systems and APIs using NodeJS.
- Architect and implement complex backend solutions, ensuring high availability and performance.
- Collaborate with product managers, frontend developers, and other stakeholders to deliver comprehensive end-to-end solutions.
- Design and optimize data storage solutions using relational databases (e.g., MySQL) and NoSQL databases (e.g., MongoDB, Redis).
- Promoting a culture of collaboration, knowledge sharing, and continuous improvement.
- Implement and enforce best practices for code quality, security, and performance optimization.
- Develop and maintain CI/CD pipelines to automate build, test, and deployment processes.
- Ensure comprehensive test coverage, including unit testing, and implement various testing methodologies and tools to validate application functionality.
- Utilize cloud services (e.g., AWS, Azure, GCP) for infrastructure deployment, management, and optimization.
- Conduct system design reviews and contribute to architectural discussions.
- Stay updated with industry trends and emerging technologies to drive innovation within the team.
- Implement secure authentication and authorization mechanisms and ensure data encryption for sensitive information.
- Design and develop event-driven applications utilizing serverless computing principles to enhance scalability and efficiency.
Requirements:
- Minimum of 1 year of proven experience as a Backend Developer, with a strong portfolio of product-building projects.
- Extensive experience with JavaScript backend frameworks (e.g., Express, Socket.IO) and a deep understanding of their ecosystems.
- Strong expertise in SQL and NoSQL databases (MySQL and MongoDB) with a focus on data modeling and scalability.
- Practical experience with Redis and caching mechanisms to enhance application performance (a cache-aside sketch follows this listing).
- Proficient in RESTful API design and development, with a strong understanding of API security best practices.
- In-depth knowledge of asynchronous programming and event-driven architecture.
- Familiarity with the entire web stack, including protocols, web server optimization techniques, and performance tuning.
- Experience with containerization and orchestration technologies (e.g., Docker, Kubernetes) is highly desirable.
- Proven experience working with cloud technologies (AWS/GCP/Azure) and understanding of cloud architecture principles.
- Strong understanding of fundamental design principles behind scalable applications and microservices architecture.
- Excellent problem-solving, analytical, and communication skills.
- Ability to work collaboratively in a fast-paced, agile environment and lead projects to successful completion.
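The Redis requirement above usually means a cache-aside pattern. It is sketched here in Python for consistency with the other examples on this page; the same flow applies in Node.js. It assumes a local Redis and the redis-py package, and fetch_user_from_db is a placeholder for the real query:
```python
import json
import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def fetch_user_from_db(user_id: int) -> dict:
    # Placeholder for the real database query.
    return {"id": user_id, "name": "example"}

def get_user(user_id: int, ttl: int = 300) -> dict:
    key = f"user:{user_id}"
    cached = r.get(key)
    if cached is not None:                   # cache hit: skip the database
        return json.loads(cached)
    user = fetch_user_from_db(user_id)
    r.setex(key, ttl, json.dumps(user))      # cache miss: populate with a TTL
    return user
```
The TTL bounds staleness; invalidating the key on writes keeps the cache consistent with the database.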


Senior Data Engineer
Location: Bangalore, Gurugram (Hybrid)
Experience: 4-8 Years
Type: Full Time | Permanent
Job Summary:
We are looking for a results-driven Senior Data Engineer to join our engineering team. The ideal candidate will have hands-on expertise in data pipeline development, cloud infrastructure, and BI support, with a strong command of modern data stacks. You’ll be responsible for building scalable ETL/ELT workflows, managing data lakes and marts, and enabling seamless data delivery to analytics and business intelligence teams.
This role requires deep technical know-how in PostgreSQL, Python scripting, Apache Airflow, AWS or other cloud environments, and a working knowledge of modern data and BI tools.
Key Responsibilities:
PostgreSQL & Data Modeling
· Design and optimize complex SQL queries, stored procedures, and indexes
· Perform performance tuning and query plan analysis (see the tuning sketch below)
· Contribute to schema design and data normalization
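A minimal sketch of that tuning loop, assuming a hypothetical orders table and the psycopg2 driver: inspect the plan with EXPLAIN ANALYZE, then add an index where the planner shows a sequential scan:
```python
import psycopg2

conn = psycopg2.connect("dbname=analytics user=etl")  # placeholder DSN
cur = conn.cursor()

cur.execute("""
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT customer_id, sum(amount)
    FROM orders
    WHERE order_date >= '2024-01-01'
    GROUP BY customer_id
""")
for (line,) in cur.fetchall():
    print(line)  # look for Seq Scan vs Index Scan, row estimates, buffers

# If the planner shows a sequential scan on order_date, an index helps:
cur.execute("CREATE INDEX IF NOT EXISTS idx_orders_date ON orders (order_date)")
conn.commit()
```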
Data Migration & Transformation
· Migrate data from multiple sources to cloud or ODS platforms
· Design schema mapping and implement transformation logic
· Ensure consistency, integrity, and accuracy in migrated data
Python Scripting for Data Engineering
· Build automation scripts for data ingestion, cleansing, and transformation
· Handle file formats (JSON, CSV, XML), REST APIs, and cloud SDKs such as Boto3 (see the ingestion sketch below)
· Maintain reusable script modules for operational pipelines
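A short ingestion sketch covering those bullets, with a placeholder API URL, bucket, and key: it pulls JSON from a REST endpoint, normalizes it to CSV, and lands it in S3 via Boto3:
```python
import csv
import io
import boto3
import requests

resp = requests.get("https://api.example.com/v1/rates", timeout=30)
resp.raise_for_status()
records = resp.json()  # assumes a non-empty list of flat dicts

# Normalize JSON records to CSV in memory.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=sorted(records[0]))
writer.writeheader()
writer.writerows(records)

# Land the file in the data lake (placeholder bucket/key).
boto3.client("s3").put_object(
    Bucket="my-datalake",
    Key="raw/rates/2024-03-15.csv",
    Body=buf.getvalue().encode("utf-8"),
)
```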
Data Orchestration with Apache Airflow
· Develop and manage DAGs for batch/stream workflows
· Implement retries, task dependencies, notifications, and failure handling (see the DAG sketch below)
· Integrate Airflow with cloud services, data lakes, and data warehouses
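A hedged sketch of such a DAG with retries, a failure callback, and an explicit task dependency; the callables and dag_id are placeholders:
```python
from datetime import datetime, timedelta
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    ...  # placeholder: pull from source

def transform():
    ...  # placeholder: clean/shape the data

def notify_on_failure(context):
    # Hook an email/Slack alert here; context carries the failed task.
    print("task failed:", context["task_instance"].task_id)

default_args = {
    "retries": 2,                         # retry transient failures
    "retry_delay": timedelta(minutes=5),
    "on_failure_callback": notify_on_failure,
}

with DAG(
    dag_id="daily_orders",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    default_args=default_args,
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    extract_task >> transform_task  # transform only runs after extract succeeds
```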
Cloud Platforms (AWS / Azure / GCP)
· Manage data storage (S3, GCS, Blob), compute services, and data pipelines
· Set up permissions, IAM roles, encryption, and logging for security
· Monitor and optimize cost and performance of cloud-based data operations
Data Marts & Analytics Layer
· Design and manage data marts using dimensional models
· Build star/snowflake schemas to support BI and self-serve analytics
· Enable incremental load strategies and partitioning
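For the incremental-load bullet, a sketch of a high-watermark delta load into a placeholder fact table; it assumes a unique constraint on order_id so the upsert can resolve conflicts:
```python
import psycopg2

conn = psycopg2.connect("dbname=analytics user=etl")  # placeholder DSN
with conn, conn.cursor() as cur:
    # Only move rows newer than the last successful load.
    cur.execute("SELECT coalesce(max(loaded_ts), timestamp 'epoch') FROM fact_sales")
    (watermark,) = cur.fetchone()
    cur.execute(
        """
        INSERT INTO fact_sales (order_id, date_key, store_key, amount, loaded_ts)
        SELECT order_id, date_key, store_key, amount, updated_at
        FROM staging_sales
        WHERE updated_at > %s
        ON CONFLICT (order_id) DO UPDATE
          SET amount = EXCLUDED.amount, loaded_ts = EXCLUDED.loaded_ts
        """,
        (watermark,),
    )
```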
Modern Data Stack Integration
· Work with tools like DBT, Fivetran, Redshift, Snowflake, BigQuery, or Kafka
· Support modular pipeline design and metadata-driven frameworks
· Ensure high availability and scalability of the stack
BI & Reporting Tools (Power BI / Superset / Supertech)
· Collaborate with BI teams to design datasets and optimize queries
· Support development of dashboards and reporting layers
· Manage access, data refreshes, and performance for BI tools
Required Skills & Qualifications:
· 4–6 years of hands-on experience in data engineering roles
· Strong SQL skills in PostgreSQL (tuning, complex joins, procedures)
· Advanced Python scripting skills for automation and ETL
· Proven experience with Apache Airflow (custom DAGs, error handling)
· Solid understanding of cloud architecture (especially AWS)
· Experience with data marts and dimensional data modeling
· Exposure to modern data stack tools (DBT, Kafka, Snowflake, etc.)
· Familiarity with BI tools like Power BI, Apache Superset, or Supertech BI
· Version control (Git) and CI/CD pipeline knowledge is a plus
· Excellent problem-solving and communication skills
🔥 High Priority – Senior Lead Java Developer (10+ Years) | Bangalore – Onsite
Summary :
We are hiring Senior Lead Java Developers with 10+ years of experience for an onsite role in Bangalore.
If you're a hands-on expert with a strong background in Java, Spring Boot, Microservices, and Kubernetes, this is your opportunity to lead, mentor, and deliver high-quality solutions in a fast-paced environment.
🔹 Position : Senior Lead Java Developer
🔹 Experience : 10+ Years (12+ preferred)
🔹 Location : Bangalore (Onsite)
🔹 Openings : 6+
✅ Must-Have Skills :
- 8+ years of hands-on experience with Core Java & Spring Boot
- Expertise in Multithreading, Dependency Injection, and AOP
- Strong in Microservices Architecture and RESTful services
- Good exposure to SQL & NoSQL databases
- Proficient with Git (GitLab preferred)
- Experience with Kubernetes deployments and APM tools (New Relic preferred)
- Solid understanding of distributed tracing and log analysis
- Proven debugging and performance optimization skills
💼 Responsibilities :
- Design and develop high-quality, scalable microservices
- Act as SME for multiple services or subsystems
- Own service performance, SLAs, and incident resolutions
- Mentor junior developers and conduct technical interviews
- Participate in production war rooms and troubleshooting
- Lead development efforts and drive code quality
🎓 Qualification :
- BE/B.Tech or equivalent degree

- 8-10 years of experience in ETL Testing, Snowflake, DWH Concepts.
- Strong SQL knowledge & debugging skills are a must.
- Experience in Azure and Snowflake testing is a plus
- Experience with Qlik Replicate and Compose (Change Data Capture) tools is considered a plus
- Strong data warehousing concepts and experience with ETL tools such as Talend Cloud Data Integration and Pentaho/Kettle
- Experience with JIRA and the Xray defect management tool is good to have.
- Exposure to the financial domain knowledge is considered a plus.
- Test data readiness (data quality) and address code or data issues (see the reconciliation sketch at the end of this listing)
- Demonstrated ability to rationalize problems and use judgment and innovation to define clear and concise solutions
- Demonstrated strong collaborative experience across regions (APAC, EMEA, and NA) to effectively and efficiently identify the root cause of code/data issues and implement a permanent solution
- Prior experience with State Street and Charles River Development (CRD) considered a plus
- Experience in tools such as PowerPoint, Excel, SQL
- Exposure to Third party data providers such as Bloomberg, Reuters, MSCI and other Rating agencies is a plus
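To ground the data-readiness testing this listing describes, here is an illustrative reconciliation check that compares row counts and a column checksum between source and target; sqlite3 stands in for the real warehouse connections, and the trades table is invented:
```python
import sqlite3

# Stand-ins for the real source and target connections.
src = sqlite3.connect(":memory:")
tgt = sqlite3.connect(":memory:")
for db in (src, tgt):
    db.execute("CREATE TABLE trades (trade_id INTEGER, notional REAL)")
    db.executemany("INSERT INTO trades VALUES (?, ?)", [(1, 100.0), (2, 250.5)])

def profile(db):
    # Row count plus a simple sum checksum on a numeric column.
    return db.execute(
        "SELECT count(*), round(sum(notional), 2) FROM trades"
    ).fetchone()

assert profile(src) == profile(tgt), "source/target mismatch"
print("counts and checksums match:", profile(src))
```
In practice the same pattern runs against the upstream system and Snowflake, with per-column checksums or hash comparisons for finer-grained data-quality checks.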