
Must Have Skills:
- Good experience in PySpark, including DataFrame core functions and Spark SQL.
- Good experience with SQL databases; able to write queries of fair complexity.
- Excellent experience in Big Data programming for data transformations and aggregations.
- Good grasp of ELT architecture: processing business rules and extracting data from the Data Lake into data streams for business consumption.
- Good customer communication skills.
- Good analytical skills.
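The business-rules step of an ELT flow mentioned above can be pictured as a filter-and-enrich pass over extracted records. The sketch below is a minimal pure-Python illustration; the record fields and the two rules are hypothetical examples, not a production framework.

```python
# Minimal sketch of business-rules processing in an ELT flow:
# records extracted from a data lake are validated and enriched
# before being published downstream for business consumption.
# Field names ("amount", "tier") and rules are hypothetical.

def apply_business_rules(records):
    """Yield only valid records, tagging each with a derived field."""
    for rec in records:
        if rec.get("amount", 0) <= 0:  # rule 1: drop non-positive amounts
            continue
        # rule 2: derive a consumption tier from the amount
        rec["tier"] = "high" if rec["amount"] > 1000 else "standard"
        yield rec

raw = [
    {"id": 1, "amount": 2500},
    {"id": 2, "amount": -10},   # rejected by rule 1
    {"id": 3, "amount": 300},
]
processed = list(apply_business_rules(raw))
```

In a real pipeline the same shape of logic would typically run inside Spark transformations rather than a plain generator.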
Technology Skills (Good to Have):
- Building and operationalizing large-scale enterprise data solutions and applications using one or more Azure data and analytics services in combination with custom solutions - Azure Synapse/Azure SQL DW, Azure Data Lake, Azure Blob Storage, Spark, HDInsight, Databricks, Cosmos DB, Event Hubs/IoT Hub.
- Experience migrating on-premises data warehouses to data platforms on the Azure cloud.
- Designing and implementing data engineering, ingestion, and transformation functions.
- Azure Synapse or Azure SQL Data Warehouse.
- Spark on Azure, as available in HDInsight and Databricks.

Similar jobs
Strong Snowflake Data Architect profile (Cloud Data Platform / AI-led Data Transformation)
Mandatory (Experience 1) – Must have 8+ years of experience in Data Engineering / Data Architecture, with strong focus on building enterprise-scale data platforms
Mandatory (Experience 2) – Must have 3+ years of deep hands-on experience in Snowflake architecture, including designing and implementing scalable data warehouse solutions
Mandatory (Experience 3) – Strong expertise in Snowflake features including Resource Monitors, RBAC, Virtual Warehouses, Time Travel, Zero Copy Clone, and query performance optimization
Mandatory (Experience 4) – Proven experience building and managing data ingestion pipelines using Snowpipe, handling structured, semi-structured (JSON, XML), and columnar data formats (Parquet)
Mandatory (Experience 5) – Strong experience in cloud ecosystem, preferably AWS, including S3, Lambda, EC2, Redshift, and integration with Snowflake-based architectures
Mandatory (Experience 6) – Proven experience in migrating data from on-premises or legacy systems to Snowflake, including data modeling, transformation, and validation
Mandatory (Experience 7) – Hands-on experience in SQL, SnowSQL, Python, or PySpark for data transformation, automation, and monitoring
Mandatory (Experience 8) – Experience in data modeling, partitioning, micro-partitions, and re-clustering strategies in Snowflake
Mandatory (Experience 9) – Must have experience working in client-facing or consulting roles, including requirement gathering, solution design, and stakeholder communication
Mandatory (Skill 1) – Strong understanding of end-to-end data architecture including ETL/ELT pipelines, data lakes, and warehouse integration
Mandatory (Skill 2) – Experience in designing monitoring and automation frameworks using Python, Bash, or similar tools
Mandatory (Skill 3) – Ability to translate business requirements into scalable technical solutions and define future-state data architecture roadmaps
Mandatory (Note) – Only immediate joiners or candidates who can join within 15 days will be considered
JOB DESCRIPTION: BACKEND DEVELOPER
Do you want to work at a fast-growing company, doing meaningful work and having fun doing it? Remitbee, a FinTech headquartered in Canada with an office in Chennai, is seeking a skilled Backend Developer with experience in Node.js. Applicants to Remitbee careers should be passionate about tech and driven to push the industry forward with the Remitbee team. This position will be based out of Chennai or remote.
This position also comes with the opportunity for career growth and working-hour flexibility. We look forward to reading your application.
What will you do?
- Work in an agile team of developers, QA, DevOps and founders
- Implement new systems and redesign legacy systems, using leading technologies, to support advancing business requirements
- Research and analyze business and system needs, and explore solution options to recommend designs and technologies
- Write test cases
Skills and requirements:
- At least 3 years of experience in backend technologies like Node.js, Express, Sequelize
- Experience with automated task runners such as Grunt or Gulp
- Experience with databases such as MySQL and/or PostgreSQL
- Comfortable applying engineering best practices for Test-Driven Development, integration testing, version control, release management, work estimation, and planning
- Experience working with REST and GraphQL APIs
- You know how to use Git.
- You are passionate about code quality; writing tests and documentation is part of your natural workflow.
- Participate in or lead all parts of the software development lifecycle, including analysis, design, programming, testing, implementation, and support.
- A history of active contributions to open source projects
Type: Full time
Location: Chennai
Experience: 3+yrs (Required)
Notice Period: 0-30 days (Preferred)
Work timing: 9 AM - 6 PM IST
Java Developer - Microservices ( 4+ Yrs) - Mumbai & Gurgaon
Java Developer - Angular ( 3+ Yrs) - Mumbai & Gurgaon
Must have 4+ years of experience in Enterprise Java 8 and above
Strong in Core Java (Collections, Threads, Regular Expressions, concurrency, Lambdas, Reactive, Exception handling).
Strong experience in microservices and event driven processing systems.
Experience architecting and implementing apps using the Spring ecosystem, including Spring Boot, Spring MVC, Spring JDBC, and Spring Cloud.
Good knowledge of relational databases (Oracle) or NoSQL databases is preferred.
Experience in writing and automating test scripts using Mockito/JUnit, SpringBootTest, etc.
Must be capable of performing code reviews and mentoring junior developers to drive high-quality deliverables.
Strong track record of delivering projects first-time-right, with zero defects in production.
Very good analytical and problem-solving abilities, and strong verbal and written communication skills.
- 3+ years’ experience in manual and automated QA for Web, iOS, and Android
- Analyze user stories, use cases, and requirements for validity and feasibility
- Collaborate closely with other team members and departments
- Write all test cases independently
- Execute all levels of testing (system, integration, regression, and security)
- Be proactive in identifying issues; deep-dive, troubleshoot, communicate with stakeholders, escalate issues, and provide status reports
- Sense of ownership and drive and a willingness to accept the challenge of daily deadlines is essential
- Design and develop automation scripts when needed
We are looking for an outstanding Big Data Engineer with experience setting up and maintaining Data Warehouses and Data Lakes for an organization. This role collaborates closely with the Data Science team and helps the team build and deploy machine learning and deep learning models on big data analytics platforms.
Roles and Responsibilities:
- Develop and maintain scalable data pipelines and build out new integrations and processes required for optimal extraction, transformation, and loading of data from a wide variety of data sources using 'Big Data' technologies.
- Develop programs in Scala and Python as part of data cleaning and processing.
- Assemble large, complex data sets that meet functional/non-functional business requirements, fostering data-driven decision-making across the organization.
- Design and develop distributed, high-volume, high-velocity, multi-threaded event processing systems.
- Implement processes and systems to validate data and monitor data quality, ensuring production data is always accurate and available for the key stakeholders and business processes that depend on it.
- Perform root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
- Deliver operational excellence, guaranteeing high availability and platform stability.
- Collaborate closely with the Data Science team and help the team build and deploy machine learning and deep learning models on big data analytics platforms.
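The multi-threaded event processing responsibility above can be sketched in miniature with the standard library: producers push events onto a queue and worker threads consume and aggregate them. The event shape, the doubling transformation, and the thread count are illustrative assumptions only.

```python
# Toy sketch of a multi-threaded event processing pipeline:
# events flow through a thread-safe queue to a pool of workers.
import queue
import threading

events = queue.Queue()
results = []
lock = threading.Lock()

def worker():
    while True:
        evt = events.get()
        if evt is None:          # sentinel: shut this worker down
            events.task_done()
            break
        with lock:               # aggregation must be thread-safe
            results.append(evt["value"] * 2)  # stand-in transformation
        events.task_done()

workers = [threading.Thread(target=worker) for _ in range(4)]
for t in workers:
    t.start()

for i in range(100):             # simulate a burst of incoming events
    events.put({"value": i})
for _ in workers:
    events.put(None)             # one sentinel per worker

events.join()                    # wait until every event is processed
for t in workers:
    t.join()
```

Production systems would replace the in-process queue with a distributed broker such as Kafka, but the producer/consumer shape is the same.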
Skills:
- Experience with Big Data pipeline, Big Data analytics, Data warehousing.
- Experience with SQL/NoSQL, schema design, and dimensional data modeling.
- Strong understanding of Hadoop architecture and the HDFS ecosystem, and experience with the Big Data technology stack, such as HBase, Hadoop, Hive, and MapReduce.
- Experience in designing systems that process structured as well as unstructured data at large scale.
- Experience in AWS/Spark/Java/Scala/Python development.
- Strong skills in PySpark (Python and Spark): the ability to create, manage, and manipulate Spark DataFrames, and expertise in Spark query tuning and performance optimization.
- Experience in developing efficient software code/frameworks for multiple use cases leveraging Python and big data technologies.
- Prior exposure to streaming data sources such as Kafka.
- Knowledge of shell scripting and Python scripting.
- High proficiency in database skills (e.g., Complex SQL), for data preparation, cleaning, and data wrangling/munging, with the ability to write advanced queries and create stored procedures.
- Experience with NoSQL databases such as Cassandra / MongoDB.
- Solid experience in all phases of Software Development Lifecycle - plan, design, develop, test, release, maintain and support, decommission.
- Experience with DevOps tools (GitHub, Travis CI, and JIRA) and methodologies (Lean, Agile, Scrum, Test Driven Development).
- Experience building and deploying applications on both on-premises and cloud-based infrastructure.
- A good understanding of the machine learning landscape and its concepts.
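The "complex SQL" data-wrangling skill above (CTEs, aggregates, subqueries) can be sketched against the standard-library sqlite3 module as a stand-in for a production database. The table, columns, and values are hypothetical; the pattern is the point.

```python
# Sketch of complex-SQL data preparation: a CTE computes per-customer
# totals, then the outer query keeps only customers above the average.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER, customer TEXT, amount REAL);
    INSERT INTO orders VALUES
        (1, 'acme', 120.0), (2, 'acme', 80.0), (3, 'globex', 500.0);
""")

rows = conn.execute("""
    WITH totals AS (
        SELECT customer, SUM(amount) AS total
        FROM orders
        GROUP BY customer
    )
    SELECT customer, total
    FROM totals
    WHERE total > (SELECT AVG(total) FROM totals)
    ORDER BY total DESC
""").fetchall()
```

Stored procedures mentioned in the requirement are a server-side feature (e.g., in PostgreSQL or SQL Server); sqlite3 is used here only because it runs anywhere.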
Qualifications and Experience:
Engineering graduates and postgraduates, preferably in Computer Science from premier institutions, with 3-5 years of proven work experience as a Big Data Engineer or in a similar role.
Certifications:
Good to have at least one of the Certifications listed here:
AZ-900 - Azure Fundamentals
DP-200, DP-201, DP-203, AZ-204 - Data Engineering
AZ-400 - DevOps certification
- Create and manage iOS applications for both iPhone and iPad using APIs and third-party SDKs
Required – Swift (MVVM), Objective-C
• Experience with the MEAN/MERN stack.
• Experience with Node.js, Express.js, MongoDB, and Angular/AngularJS or React JS.
• Knowledge of Java and/or Python programming will be an added plus.
• Knowledge of Docker and Kubernetes will be an added plus.
• Knowledge of Git, Jenkins, Nexus, SonarQube tools (or similar)
• Experience working in Agile (Scrum)
• Knowledge of working with AWS (S3, EMR, EC2, RDS)
• Preference will be given to candidates who have traveled onsite or who hold a US business visa.
