
50+ ETL Jobs in India

Apply to 50+ ETL Jobs on CutShort.io. Find your next job, effortlessly. Browse ETL Jobs and apply today!

Branch International

at Branch International

4 candid answers
1 video
Reshika Mendiratta
Posted by Reshika Mendiratta
Remote only
7yrs+
₹50L - ₹70L / yr
Data Structures
Algorithms
Object Oriented Programming (OOPs)
ETL
ETL architecture
+5 more

Branch Overview


Imagine a world where every person has improved access to financial services. People could start new businesses, pay for their children’s education, cover their emergency medical bills – the possibilities to improve life are endless. 


Branch is a global technology company revolutionizing financial access for millions of underserved banking customers today across Africa and India. By leveraging the rapid adoption of smartphones, machine learning and other technology, Branch is pioneering new ways to improve access and value for those overlooked by banks. From instant loans to market-leading investment yields, Branch offers a variety of products that help our customers be financially empowered.


Branch’s mission-driven team is led by the co-founders of Kiva.org and one of the earliest product leaders of PayPal. Branch has raised over $100 million from leading Silicon Valley investors, including Andreessen Horowitz (a16z) and Visa. 

 

With over 32 million downloads, Branch is one of the most popular finance apps in the world.

 

Job Overview

Branch launched in India in early 2019 and has seen rapid adoption and growth. In 2020 we started building out a full Engineering team in India to accelerate our success here. This team is working closely with our engineering team (based in the United States, Nigeria, and Kenya) to strengthen the capabilities of our existing product and build out new product lines for the company.


You will work closely with our Product and Data Science teams to design and maintain multiple technologies, including our API backend, credit scoring and underwriting systems, payments integrations, and operations tools. We face numerous interesting technical challenges ranging from maintaining complex financial systems to accessing and processing creative data sources for our algorithmic credit model. 


As a company, we are passionate about our customers, fearless in the face of barriers, and driven by data. As an engineering team, we value bottom-up innovation and decentralized decision-making: We believe the best ideas can come from anyone in the company, and we are working hard to create an environment where everyone feels empowered to propose solutions to the challenges we face. We are looking for individuals who thrive in a fast-moving, innovative, and customer-focused setting.


Responsibilities

  • Make significant contributions to Branch’s data platform including data models, transformations, warehousing, and BI systems by bringing in best practices.
  • Build customer-facing and internal products and APIs with industry best practices around security and performance in mind.
  • Influence and shape the company’s technical and product roadmap by providing timely and accurate inputs and owning various outcomes.
  • Collaborate with peers in other functional areas (Machine Learning, DevOps, etc.) to identify potential growth areas and systems needed.
  • Guide and mentor junior engineers around you.
  • Scale our systems to ever-growing levels of traffic and handle complexity.


Qualifications

  • You have strong experience (8+ years) in designing, coding, and shipping data and backend software for web-based or mobile products.
  • Experience coordinating and collaborating with various business stakeholders and company leadership on critical functional decisions and technical roadmap.
  • You have strong knowledge of software development fundamentals, including relevant background in computer science fundamentals, distributed systems, data storage and processing, and agile development methodologies.
  • Have experience designing maintainable and scalable data architecture for ETL and BI purposes.
  • You are able to utilize your knowledge and expertise to code and ship quality products in a timely manner.
  • You are pragmatic and combine a strong understanding of technology and product needs to arrive at the best solution for a given problem.
  • You are highly entrepreneurial and thrive in taking ownership of your own impact. You take the initiative to solve problems before they arise.
  • You are an excellent collaborator & communicator. You know that startups are a team sport. You listen to others, aren’t afraid to speak your mind and always try to ask the right questions. 
  • You are excited by the prospect of working in a distributed team and company, working with teammates from all over the world.

Benefits of Joining

  • Mission-driven, fast-paced and entrepreneurial environment
  • Competitive salary and equity package
  • A collaborative and flat company culture
  • Remote first, with the option to work in-person occasionally
  • Fully-paid Group Medical Insurance and Personal Accidental Insurance
  • Unlimited paid time off including personal leave, bereavement leave, sick leave
  • Fully paid parental leave - 6 months maternity leave and 3 months paternity leave
  • Monthly WFH stipend alongside a one-time home office set-up budget
  • $500 Annual professional development budget 
  • Discretionary trips to our offices across the globe, with global travel medical insurance 
  • Team meals and social events- Virtual and In-person

Branch International is an Equal Opportunity Employer. The company does not and will not discriminate in employment on any basis prohibited by applicable law. We’re looking for more than just qualifications -- so if you’re unsure that you meet the criteria, please do not hesitate to apply!

 

Read more
DataGrokr

at DataGrokr

4 candid answers
5 recruiters
Reshika Mendiratta
Posted by Reshika Mendiratta
Bengaluru (Bangalore)
5yrs+
Up to ₹30L / yr (varies)
Data engineering
Python
SQL
ETL
Data Warehouse (DWH)
+12 more

Lightning Job By Cutshort⚡

 

As part of this feature, you can expect status updates about your application and replies within 72 hours (once the screening questions are answered).


About DataGrokr

DataGrokr (https://www.datagrokr.com) is a cloud native technology consulting organization providing the next generation of big data analytics, cloud and enterprise solutions. We solve complex technology problems for our global clients who rely on us for our deep technical knowledge and delivery excellence.

If you are unafraid of technology, believe in your learning ability and are looking to work amongst smart, driven colleagues whom you can look up to and learn from, you might want to check us out. 


About the role

We are looking for a Senior Data Engineer to join our growing engineering team. As a member of the team,

• You will work on enterprise data platforms, architect and implement data lakes both on-prem and in the cloud.

• You will be responsible for evolving technical architecture, design and implementation of data solutions using a variety of big data technologies. You will work extensively on all major public cloud platforms - AWS, Azure and GCP.

• You will work with senior technical architects on our client side to evolve an effective technology architecture and development strategy to implement the solution.

• You will work with extremely talented peers and follow modern engineering practices using agile methodologies.

• You will coach, mentor and lead other engineers and provide guidance to ensure the quality and consistency of the solution.


Must-have skills and attitudes:

• Passion for data engineering, with in-depth knowledge of some of the following technologies – SQL (expert level), Python (expert level), Spark (intermediate level), and the big data stack of AWS or GCP.

• Hands on experience in data wrangling, data munging and ETL. Should be able to source data from anywhere and transform data to any shape using SQL, Python or Spark.

• Hands-on experience working with variable data structures like XML/JSON/AVRO, etc.

• Ability to create data models and architect data warehouse components

• Experience with version control (Git/Bitbucket, etc.)

• Strong understanding of Agile, experience with CI/CD pipelines and processes

• Ability to communicate with technical as well as non-technical audience

• Collaborating with various stakeholders

• Have led scrum teams and participated in sprint grooming and planning sessions, as well as work/effort sizing and estimation


Desired Skills & Experience:

• At least 5 years of industry experience

• Working knowledge of any of the following - AWS Big Data Stack (S3, Redshift, Athena, Glue, etc.), GCP Big Data Stack (Cloud Storage, Workflow, Dataflow, Cloud Functions, Big Query, Pub Sub, etc.).

• Working knowledge of traditional enterprise data warehouse architectures and migrating them to the Cloud.

• Experience with data visualization tools (Tableau, Power BI, etc.)

• Experience with JIRA, Azure DevOps, etc.


How will DataGrokr support you in your growth:

• You will be groomed and mentored by senior leaders to take on leadership positions in the company

• You will be actively encouraged to attain certifications, lead technical workshops and conduct meetups to grow your own technology acumen and personal brand

• You will work in an open culture that promotes commitment over compliance, individual responsibility over rules and bringing out the best in everyone.

Read more
Techcronus Business Solutions Pvt. Ltd.
Bhumika Gondaliya
Posted by Bhumika Gondaliya
Ahmedabad, Gujarat
3 - 5 yrs
₹7L - ₹10L / yr
Data Warehouse (DWH)
Informatica
ETL
Data migration
Data integration
+7 more

Role & Responsibilities:


  • Ability to architect Azure cloud-based application modernization, Azure infrastructure setup, configuration and management
  • Ability to create, recreate, rewrite, or refactor applications for cloud resource optimization.
  • Make use of Azure Integration Services: Logic Apps, Service Bus, API Management and Event Grid.
  • Assure that data is cleansed, mapped, transformed, and otherwise optimized for storage and use according to business and technical requirements.
  • Solution design using Microsoft Azure services and tools including Data Factory, Data Lake, Synapse etc.
  • Extracting data, troubleshooting and maintaining the data warehouse.
  • Experience of SQL and Dataverse databases is mandatory.
  • The ability to automate tasks and deploy production standard code (with unit testing, continuous integration, versioning etc.).
  • Load transformed data into storage and reporting structures in destinations including data warehouse, high speed indexes, real-time reporting systems and analytics applications.
  • Build data pipelines to collectively bring together data.
  • Utilize Microsoft Azure PaaS and SaaS solution development technologies including Azure Functions, Azure Notifications Hub, Azure App Service and Key Vault
  • Set up new CI/CD pipelines or modify existing ones (YAML or Classic).
  • Hands-on experience with automation tools, cloud computing platforms, and scripting languages
  • Ability to learn and implement automation tools and technologies, such as Azure DevOps, Docker, and Terraform on the Azure platform.
  • Knowledge of containerization and container orchestration, such as Kubernetes
  • Experience with Azure monitoring and error-logging tools, debugging skills, and problem-solving ability.


Read more
Intellikart Ventures LLP
Prajwal Shinde
Posted by Prajwal Shinde
Hyderabad
6 - 11 yrs
₹7L - ₹15L / yr
Python
SQL
Relational Database (RDBMS)
NOSQL Databases
SQL Azure
+3 more

·       Honest, transparent team player and go-getter.

·       Friendly and positive work culture that's based on respect for the individual.

·       Work on large-scale projects with onshore and offshore models.

·       Healthcare domain knowledge with at least 6+ years of experience.

·       Knowledge of healthcare data domains and supported data initiatives, like data governance, data modelling, and building data-driven use cases.

·       Knowledge of source-to-target mapping.

·       Works well with optimizers and helps with performance tuning of SQL Server and Azure objects.

·       Has a can-do attitude and has used cutting-edge technologies in real projects.

·       Support and perform daily tasks around SQL Server and ETL.

·       Ability to integrate with Azure and ADF.

·       Very process-focused, with good knowledge of error/exception handling.

·       Ability to code with a modification history and to perform thorough unit testing before handing work over.

·       Ability to review code; trustworthy and a leader in finishing ETL processes from start to end without errors.

·       Work in an environment with pair programming and Agile.

 

Key responsibilities:

 

·       Develop and maintain databases in the Azure environment.

·       Build ETL processes with Azure Data Factory, Azure Databricks, and data queries.

·       Perform migration processes to or from the Azure environment.

·       Azure data engineering: Azure Data Factory, Azure Data Lake, Azure Databricks, Azure Synapse.

·       Develop using Python/Scala and SQL Server components.

·       Work with SQL Server on a daily basis, including problem solving.

·       Work with RDBMS and NoSQL.

·       Work in the team following Agile practices.

·       Support the exploration and sale of new market trends to ensure growth through interesting projects that leverage the latest technologies.

·       Actively participate in building specialty group skills together with others.

Read more
Quinnox

at Quinnox

2 recruiters
MidhunKumar T
Posted by MidhunKumar T
Bengaluru (Bangalore), Mumbai
10 - 15 yrs
₹30L - ₹35L / yr
ADF
Azure Data Lake Services
SQL Azure
Azure Synapse
Spark
+4 more

Mandatory Skills: Azure Data Lake Storage, Azure SQL databases, Azure Synapse, Databricks (PySpark/Spark), Python, SQL, Azure Data Factory.


Good to have: Power BI, Azure IaaS services, Azure DevOps, Microsoft Fabric


• Very strong understanding of ETL and ELT

• Very strong understanding of Lakehouse architecture

• Very strong knowledge of PySpark and Spark architecture

• Good knowledge of Azure Data Lake architecture and access controls

• Good knowledge of Microsoft Fabric architecture

• Good knowledge of Azure SQL databases

• Good knowledge of T-SQL

• Good knowledge of CI/CD processes using Azure DevOps

• Power BI

Read more
Technogen India PvtLtd

at Technogen India PvtLtd

4 recruiters
Mounika G
Posted by Mounika G
Hyderabad
11 - 16 yrs
₹24L - ₹27L / yr
Data Warehouse (DWH)
Informatica
ETL
Amazon Web Services (AWS)
SQL
+1 more

Daily and monthly responsibilities

  • Review and coordinate with business application teams on data delivery requirements.
  • Develop estimation and proposed delivery schedules in coordination with development team.
  • Develop sourcing and data delivery designs.
  • Review data model, metadata and delivery criteria for solution.
  • Review and coordinate with team on test criteria and performance of testing.
  • Contribute to the design, development and completion of project deliverables.
  • Complete in-depth data analysis and contribute to strategic efforts.
  • Develop a complete understanding of how we manage data, with a focus on improving how data is sourced and managed across multiple business areas.

 

Basic Qualifications

  • Bachelor’s degree.
  • 5+ years of data analysis working with business data initiatives.
  • Knowledge of Structured Query Language (SQL) and use in data access and analysis.
  • Proficient in data management including data analytical capability.
  • Excellent verbal and written communication skills, along with high attention to detail.
  • Experience with Python.
  • Presentation skills in demonstrating system design and data analysis solutions.


Read more
Wissen Technology

at Wissen Technology

4 recruiters
Vijayalakshmi Selvaraj
Posted by Vijayalakshmi Selvaraj
Mumbai
4 - 9 yrs
₹15L - ₹32L / yr
Java
ETL
SQL
Data engineering
Scala

Java/Scala + Data Engineer

 

Experience: 5-10 years

Location: Mumbai

Notice: Immediate to 30 days

Required Skills:

·       5+ years of software development experience.

·       Excellent skills in Java and/or Scala programming, with expertise in backend architectures, messaging technologies, and related frameworks.

·       Experience developing data pipelines (batch/streaming), complex data transformations, ETL orchestration, and data migration, as well as developing and maintaining data warehouses / data lakes.

·       Extensive experience in complex SQL queries, database development, and data engineering, including the development of procedures, packages, functions, and handling exceptions.

·       Knowledgeable in issue tracking tools (e.g., JIRA), code collaboration tools (e.g., Git/GitLab), and team collaboration tools (e.g., Confluence/Wiki).

·       Proficient in Linux/Unix, including shell scripting.

·       Ability to translate business and architectural features into quality, consistent software design.

·       Solid understanding of programming practices, emphasizing reusable, flexible, and reliable code.

Read more
Piako
PiaKo Store
Posted by PiaKo Store
Kolkata
4 - 8 yrs
₹12L - ₹24L / yr
Python
Amazon Web Services (AWS)
ETL

We are a rapidly expanding global technology partner looking for a highly skilled Senior (Python) Data Engineer to join our exceptional Technology and Development team. The role is based in Kolkata. If you are passionate about demonstrating your expertise and thrive on collaborating with a group of talented engineers, then this role was made for you!

At the heart of technology innovation, our client specializes in delivering cutting-edge solutions to clients across a wide array of sectors. With a strategic focus on finance, banking, and corporate verticals, they have earned a stellar reputation for their commitment to excellence in every project they undertake.

We are searching for a senior engineer to strengthen their global projects team. They seek an experienced Senior Data Engineer with a strong background in building Extract, Transform, Load (ETL) processes and a deep understanding of AWS serverless cloud environments.

As a vital member of the data engineering team, you will play a critical role in designing, developing, and maintaining data pipelines that facilitate data ingestion, transformation, and storage for our organization.

Your expertise will contribute to the foundation of our data infrastructure, enabling data-driven decision-making and analytics.

Key Responsibilities:

  • ETL Pipeline Development: Design, develop, and maintain ETL processes using Python, AWS Glue, or other serverless technologies to ingest data from various sources (databases, APIs, files), transform it into a usable format, and load it into data warehouses or data lakes.
  • AWS Serverless Expertise: Leverage AWS services such as AWS Lambda, AWS Step Functions, AWS Glue, AWS S3, and AWS Redshift to build serverless data pipelines that are scalable, reliable, and cost-effective.
  • Data Modeling: Collaborate with data scientists and analysts to understand data requirements and design appropriate data models, ensuring data is structured optimally for analytical purposes.
  • Data Quality Assurance: Implement data validation and quality checks within ETL pipelines to ensure data accuracy, completeness, and consistency.
  • Performance Optimization: Continuously optimize ETL processes for efficiency, performance, and scalability, monitoring and troubleshooting any bottlenecks or issues that may arise.
  • Documentation: Maintain comprehensive documentation of ETL processes, data lineage, and system architecture to ensure knowledge sharing and compliance with best practices.
  • Security and Compliance: Implement data security measures, encryption, and compliance standards (e.g., GDPR, HIPAA) as required for sensitive data handling.
  • Monitoring and Logging: Set up monitoring, alerting, and logging systems to proactively identify and resolve data pipeline issues.
  • Collaboration: Work closely with cross-functional teams, including data scientists, data analysts, software engineers, and business stakeholders, to understand data requirements and deliver solutions.
  • Continuous Learning: Stay current with industry trends, emerging technologies, and best practices in data engineering and cloud computing and apply them to enhance existing processes.

Qualifications:

  • Bachelor's or Master's degree in Computer Science, Data Science, or a related field.
  • Proven experience as a Data Engineer with a focus on ETL pipeline development.
  • Strong proficiency in Python programming.
  • In-depth knowledge of AWS serverless technologies and services.
  • Familiarity with data warehousing concepts and tools (e.g., Redshift, Snowflake).
  • Experience with version control systems (e.g., Git).
  • Strong SQL skills for data extraction and transformation.
  • Excellent problem-solving and troubleshooting abilities.
  • Ability to work independently and collaboratively in a team environment.
  • Effective communication skills for articulating technical concepts to non-technical stakeholders.
  • Certifications such as AWS Certified Data Analytics - Specialty or AWS Certified DevOps Engineer are a plus.

Preferred Experience:

  • Knowledge of data orchestration and workflow management tools
  • Familiarity with data visualization tools (e.g., Tableau, Power BI).
  • Previous experience in industries with strict data compliance requirements (e.g., insurance, finance) is beneficial.

What You Can Expect:

- Innovation Abounds: Join a company that constantly pushes the boundaries of technology and encourages creative thinking. Your ideas and expertise will be valued and put to work in pioneering solutions.

- Collaborative Excellence: Be part of a team of engineers who are as passionate and skilled as you are. Together, you'll tackle challenging projects, learn from each other, and achieve remarkable results.

- Global Impact: Contribute to projects with a global reach and make a tangible difference. Your work will shape the future of technology in finance, banking, and corporate sectors.

They offer an exciting and professional environment with great career and growth opportunities. Their office is located in the heart of Salt Lake Sector V, offering a terrific workspace that's both accessible and inspiring. Their team members enjoy a professional work environment with regular team outings. Joining the team means becoming part of a vibrant and dynamic team where your skills will be valued, your creativity will be nurtured, and your contributions will make a difference. In this role, you can work alongside some of the brightest minds in the industry.

If you're ready to take your career to the next level and be part of a dynamic team that's driving innovation on a global scale, we want to hear from you.

Apply today for more information about this exciting opportunity.

Onsite Location: Kolkata, India (Salt Lake Sector V)


Read more
Bengaluru (Bangalore)
1 - 6 yrs
₹2L - ₹8L / yr
Spotfire
Qlikview
Tableau
PowerBI
Data Visualization
+9 more

ROLE AND RESPONSIBILITIES

Should be able to work as an individual contributor and maintain good relationships with stakeholders. Should be proactive in learning new skills per business requirements. Familiar with extracting relevant data, and cleansing and transforming data into insights that drive business value, through the use of data analytics, data visualization, and data modeling techniques.


QUALIFICATIONS AND EDUCATION REQUIREMENTS

Technical Bachelor’s Degree.

Non-Technical Degree holders should have 1+ years of relevant experience.

Read more
Career Forge

at Career Forge

2 candid answers
Mohammad Faiz
Posted by Mohammad Faiz
Delhi, Gurugram, Noida, Ghaziabad, Faridabad
5 - 7 yrs
₹12L - ₹15L / yr
Python
Apache Spark
PySpark
Data engineering
ETL
+10 more

🚀 Exciting Opportunity: Data Engineer Position in Gurugram 🌐


Hello 


We are actively seeking a talented and experienced Data Engineer to join our dynamic team at Reality Motivational Venture in Gurugram (Gurgaon). If you're passionate about data, thrive in a collaborative environment, and possess the skills we're looking for, we want to hear from you!


Position: Data Engineer  

Location: Gurugram (Gurgaon)  

Experience: 5+ years 


Key Skills:

- Python

- Spark, Pyspark

- Data Governance

- Cloud (AWS/Azure/GCP)


Main Responsibilities:

- Define and set up analytics environments for "Big Data" applications in collaboration with domain experts.

- Implement ETL processes for telemetry-based and stationary test data.

- Support in defining data governance, including data lifecycle management.

- Develop large-scale data processing engines and real-time search and analytics based on time series data.

- Ensure technical, methodological, and quality aspects.

- Support CI/CD processes.

- Foster know-how development and transfer, continuous improvement of leading technologies within Data Engineering.

- Collaborate with solution architects on the development of complex on-premise, hybrid, and cloud solution architectures.


Qualification Requirements:

- BSc, MSc, MEng, or PhD in Computer Science, Informatics/Telematics, Mathematics/Statistics, or a comparable engineering degree.

- Proficiency in Python and the PyData stack (Pandas/NumPy).

- Experience in high-level programming languages (C#/C++/Java).

- Familiarity with scalable processing environments like Dask (or Spark).

- Proficient in Linux and scripting languages (Bash Scripts).

- Experience in containerization and orchestration of containerized services (Kubernetes).

- Education in database technologies (SQL/OLAP and NoSQL).

- Interest in Big Data storage technologies (Elastic, ClickHouse).

- Familiarity with Cloud technologies (Azure, AWS, GCP).

- Fluent English communication skills (speaking and writing).

- Ability to work constructively with a global team.

- Willingness to travel for business trips during development projects.


Preferable:

- Working knowledge of vehicle architectures, communication, and components.

- Experience in additional programming languages (C#/C++/Java, R, Scala, MATLAB).

- Experience in time-series processing.


How to Apply:

Interested candidates, please share your updated CV/resume with me.


Thank you for considering this exciting opportunity.

Read more
dataeaze systems

at dataeaze systems

1 recruiter
Ankita Kale
Posted by Ankita Kale
Remote only
5 - 8 yrs
₹12L - ₹22L / yr
Amazon Web Services (AWS)
Python
PySpark
ETL

POST - SENIOR DATA ENGINEER WITH AWS


Experience: 5 years


Must-have:

• Highly skilled in Python and PySpark

• Expertise in writing AWS Glue ETL job scripts

• Experience in working with Kafka

• Extensive SQL DB experience – Postgres

Good-to-have:

• Experience in working with data analytics and modelling

• Hands on Experience of PowerBI visualization tool

• Knowledge of and hands-on experience with a version control system - Git

Common:

• Excellent communication and presentation skills (written and verbal) at all levels of an organization

• Results-oriented, with the ability to prioritize and drive multiple initiatives to complete your work on time

• Proven ability to influence a diverse, geographically dispersed group of individuals to facilitate, moderate, and influence productive design and implementation discussions driving towards results


Shifts - Flexible (might have to work per US shift timings for meetings).

Employment Type - Any

Read more
hopscotch
Bengaluru (Bangalore)
5 - 8 yrs
₹6L - ₹15L / yr
Python
Amazon Redshift
Amazon Web Services (AWS)
PySpark
Data engineering
+3 more

About the role:

Hopscotch is looking for a passionate Data Engineer to join our team. You will work closely with other teams like data analytics, marketing, data science and individual product teams to specify, validate, prototype, scale, and deploy data pipeline features and data architecture.


Here’s what will be expected out of you:

➢ Ability to work with a fast-paced startup mindset. Should be able to manage all aspects of data extraction, transfer, and load activities.

➢ Develop data pipelines that make data available across platforms.

➢ Should be comfortable in executing ETL (Extract, Transform and Load) processes which include data ingestion, data cleaning and curation into a data warehouse, database, or data platform.

➢ Work on various aspects of the AI/ML ecosystem – data modeling, data and ML pipelines.

➢ Work closely with Devops and senior Architect to come up with scalable system and model architectures for enabling real-time and batch services.


What we want:

➢ 5+ years of experience as a data engineer or data scientist with a focus on data engineering and ETL jobs.

➢ Well versed with the concept of Data warehousing, Data Modelling and/or Data Analysis.

➢ Experience using and building pipelines and performing ETL with industry-standard best practices on Redshift (more than 2 years).

➢ Ability to troubleshoot and solve performance issues with data ingestion, data processing & query execution on Redshift.

➢ Good understanding of orchestration tools like Airflow.

 ➢ Strong Python and SQL coding skills.

➢ Strong experience in distributed systems like Spark.

➢ Experience with AWS data and ML technologies (AWS Glue, MWAA, Data Pipeline, EMR, Athena, Redshift, Lambda, etc.).

➢ Solid hands-on experience with various data extraction techniques, like CDC or time/batch based, and the related tools (Debezium, AWS DMS, Kafka Connect, etc.) for near-real-time and batch data extraction.


Note :

Experience at product-based or e-commerce companies is an added advantage.

Read more
globe teleservices
deepshikha thapar
Posted by deepshikha thapar
Bengaluru (Bangalore)
5 - 10 yrs
₹20L - ₹25L / yr
ETL
Python
Informatica
Talend



Good experience in the Extraction, Transformation, and Loading (ETL) of data from various sources into Data Warehouses and Data Marts using Informatica PowerCenter (Repository Manager, Designer, Workflow Manager, Workflow Monitor, Metadata Manager) and PowerConnect as the ETL tool on Oracle and SQL Server databases.

• Knowledge of Data Warehouse/Data Mart, ODS, OLTP, and OLAP implementations, together with project scope, analysis, requirements gathering, data modeling, ETL design, development, system testing, implementation, and production support.

• Strong experience in dimensional modeling using Star and Snowflake schemas, and in identifying facts and dimensions.

• Used various transformations like Filter, Expression, Sequence Generator, Update Strategy, Joiner, Stored Procedure, and Union to develop robust mappings in the Informatica Designer.

• Developed mapping parameters and variables to support SQL override.

• Created mapplets to reuse them in different mappings.

• Created sessions and configured workflows to extract data from various sources, transform the data, and load it into the data warehouse.

• Used Type 1 SCD and Type 2 SCD mappings to update Slowly Changing Dimension tables.

• Modified existing mappings for enhancements of new business requirements.

• Involved in performance tuning at the source, target, mapping, session, and system levels.

• Prepared migration documents to move the mappings from development to testing and then to production repositories.

• Extensive experience in developing stored procedures, functions, views, and triggers, and complex SQL queries using PL/SQL.

• Experience in resolving ongoing maintenance issues and bug fixes; monitoring Informatica/Talend sessions as well as performance tuning of mappings and sessions.

• Experience in all phases of data warehouse development, from gathering requirements for the data warehouse to developing the code, unit testing, and documenting.

• Extensive experience in writing UNIX shell scripts and automating ETL processes using UNIX shell scripting.

• Experience in using automation scheduling tools like Control-M.

• Hands-on experience across all stages of the Software Development Life Cycle (SDLC), including business requirement analysis, data mapping, build, unit testing, systems integration, and user acceptance testing.

• Build, operate, monitor, and troubleshoot Hadoop infrastructure.

• Develop tools and libraries, and maintain processes for other engineers to access data and write MapReduce programs.

Read more
Service based company
Agency job
via Vmultiply solutions by Mounica Buddharaju
Ahmedabad, Rajkot
2 - 4 yrs
₹3L - ₹6L / yr
Python
Amazon Web Services (AWS)
SQL
ETL


Qualifications :

  • Minimum 2 years of .NET development experience (ASP.Net 3.5 or greater and C# 4 or greater).
  • Good knowledge of MVC, Entity Framework, and Web API/WCF.
  • ASP.NET Core knowledge is preferred.
  • Creating APIs / Using third-party APIs
  • Working knowledge of Angular is preferred.
  • Knowledge of Stored Procedures and experience with a relational database (MSSQL 2012 or higher).
  • Solid understanding of object-oriented development principles
  • Working knowledge of web, HTML, CSS, JavaScript, and the Bootstrap framework
  • Strong understanding of object-oriented programming
  • Ability to create reusable C# libraries
  • Must be able to write clean comments, readable C# code, and the ability to self-learn.
  • Working knowledge of GIT

Qualities required :

Over and above tech skills, we prefer to have:

  • Good communication and Time Management Skill.
  • Good team player and ability to contribute on an individual basis.

  • We provide the best learning and growth environment for candidates.

Skills:

  • .NET Core
  • .NET Framework
  • ASP.NET Core
  • ASP.NET MVC
  • ASP.NET Web API
  • C#
  • HTML


Read more
Compile

at Compile

16 recruiters
Sarumathi NH
Posted by Sarumathi NH
Bengaluru (Bangalore)
7 - 10 yrs
Best in industry
Data Warehouse (DWH)
Informatica
ETL
Spark

You will be responsible for designing, building, and maintaining data pipelines that handle Real-world data at Compile. You will be handling both inbound and outbound data deliveries at Compile for datasets including Claims, Remittances, EHR, SDOH, etc.

You will

  • Work on building and maintaining data pipelines (specifically RWD).
  • Build, enhance, and maintain existing pipelines in PySpark and Python, and help build analytical insights and datasets.
  • Schedule and maintain pipeline jobs for RWD.
  • Develop, test, and implement data solutions based on the design.
  • Design and implement quality checks on existing and new data pipelines.
  • Ensure adherence to security and compliance that is required for the products.
  • Maintain relationships with various data vendors and track changes and issues across vendors and deliveries.

You have

  • Hands-on experience with ETL process (min of 5 years).
  • Excellent communication skills and ability to work with multiple vendors.
  • High proficiency with Spark, SQL.
  • Proficiency in Data modeling, validation, quality check, and data engineering concepts.
  • Experience working with big-data processing technologies such as Databricks, dbt, S3, Delta Lake, Deequ, Griffin, Snowflake, and BigQuery.
  • Familiarity with version control technologies, and CI/CD systems.
  • Understanding of scheduling tools like Airflow/Prefect.
  • Min of 3 years of experience managing data warehouses.
  • Familiarity with healthcare datasets is a plus.

Compile embraces diversity and equal opportunity in a serious way. We are committed to building a team of people from many backgrounds, perspectives, and skills. We know the more inclusive we are, the better our work will be.         

Read more
A Product Based Client, Chennai
Chennai
4 - 8 yrs
₹10L - ₹15L / yr
Data Warehouse (DWH)
Informatica
ETL
Spark
PySpark
+2 more

Analytics Job Description

We are hiring an Analytics Engineer to help drive our Business Intelligence efforts. You will partner closely with leaders across the organization, working together to understand the how and why of people, team and company challenges, workflows and culture. The team is responsible for delivering data and insights that drive decision-making, execution, and investments for our product initiatives.

You will work cross-functionally with product, marketing, sales, engineering, finance, and our customer-facing teams, enabling them with data and narratives about the customer journey. You’ll also work closely with other data teams, such as data engineering and product analytics, to ensure we are creating a strong data culture at Blend that enables our cross-functional partners to be more data-informed.


Role: Data Engineer

Please find below the JD for the Data Engineer role.

Location: Guindy, Chennai

How you’ll contribute:

• Develop objectives and metrics, ensure priorities are data-driven, and balance short-term and long-term goals

• Develop deep analytical insights to inform and influence product roadmaps and business decisions and help improve the consumer experience

• Work closely with GTM and supporting operations teams to author and develop core data sets that empower analyses

• Deeply understand the business and proactively spot risks and opportunities

• Develop dashboards and define metrics that drive key business decisions

• Build and maintain scalable ETL pipelines via solutions such as Fivetran, Hightouch, and Workato

• Design our Analytics and Business Intelligence architecture, assessing and implementing new technologies that fit

• Work with our engineering teams to continually make our data pipelines and tooling more resilient


Who you are:

• Bachelor’s degree or equivalent required from an accredited institution with a quantitative focus such as Economics, Operations Research, Statistics, or Computer Science, OR 1-3 years of experience as a Data Analyst, Data Engineer, or Data Scientist

• Must have strong SQL and data modeling skills, with experience applying skills to thoughtfully create data models in a warehouse environment.

• A proven track record of using analysis to drive key decisions and influence change

• Strong storyteller with the ability to communicate effectively with managers and executives

• Demonstrated ability to define metrics for product areas, understand the right questions to ask and push back on stakeholders in the face of ambiguous, complex problems, and work with diverse teams with different goals

• A passion for documentation.

• A solution-oriented growth mindset. You’ll need to be a self-starter and thrive in a dynamic environment.

• A bias towards communication and collaboration with business and technical stakeholders.

• Quantitative rigor and systems thinking.

• Prior startup experience is preferred, but not required.

• Interest or experience in machine learning techniques (such as clustering, decision trees, and segmentation)

• Familiarity with a scientific computing language, such as Python, for data wrangling and statistical analysis

• Experience with a SQL-focused data transformation framework such as dbt

• Experience with a Business Intelligence tool such as Mode/Tableau


Mandatory Skillset:


- Very strong in SQL

- Spark or PySpark or Python

- Shell scripting


Read more
DeepIntent

at DeepIntent

2 candid answers
17 recruiters
Indrajeet Deshmukh
Posted by Indrajeet Deshmukh
Pune
2 - 5 yrs
Best in industry
Data Warehouse (DWH)
Informatica
ETL
SQL
Java
+1 more

Who You Are:


- In-depth and strong knowledge of SQL.

- Basic knowledge of Java.

- Basic scripting knowledge.

- Strong analytical skills.

- Excellent debugging skills and problem-solving.


What You’ll Do:


- Comfortable working in EST+IST Timezone

- Troubleshoot complex issues discovered in-house as well as in customer environments.

- Replicate customer environments/issues on Platform and Data and work to identify the root cause or provide interim workaround as needed.

- Ability to debug SQL queries associated with Data pipelines.

- Monitoring and debugging ETL jobs on a daily basis.

- Provide Technical Action plans to take a customer/product issue from start to resolution.

- Capture and document any Data incidents identified on Platform and maintain the history of such issues along with resolution.

- Identify product bugs and improvements based on customer environments and work to close them

- Ensure implementation/continuous improvement of formal processes to support product development activities.

- Good at external and internal communication across stakeholders.

Read more
globe teleservices
Bengaluru (Bangalore)
5 - 8 yrs
₹25L - ₹30L / yr
Talend
Informatica
ETL

Good experience in Extraction, Transformation, and Loading (ETL) of data from various sources into Data Warehouses and Data Marts using Informatica PowerCenter (Repository Manager, Designer, Workflow Manager, Workflow Monitor, Metadata Manager) and PowerConnect as the ETL tool on Oracle and SQL Server databases.

• Knowledge of Data Warehouse/Data Mart, ODS, OLTP, and OLAP implementations, together with project scope, analysis, requirements gathering, data modeling, ETL design, development, system testing, implementation, and production support.

• Strong experience in dimensional modeling using Star and Snowflake schemas, and in identifying facts and dimensions.

Read more
Mobile Programming LLC

at Mobile Programming LLC

1 video
34 recruiters
Sukhdeep Singh
Posted by Sukhdeep Singh
Bengaluru (Bangalore)
6 - 10 yrs
₹10L - ₹15L / yr
Data engineering
Nifi
DevOps
ETL

Job description

Position: Data Engineer
Experience: 6+ years
Work Mode: Work from Office
Location: Bangalore

Please note: This position is focused on development rather than migration. Experience in Nifi or Tibco is mandatory.

Mandatory Skills: ETL, DevOps platform, Nifi or Tibco

We are seeking an experienced Data Engineer to join our team. As a Data Engineer, you will play a crucial role in developing and maintaining our data infrastructure and ensuring the smooth operation of our data platforms. The ideal candidate should have a strong background in advanced data engineering, scripting languages, cloud and big data technologies, ETL tools, and database structures.

 

Responsibilities:

  • Utilize advanced data engineering techniques, including ETL (Extract, Transform, Load), SQL, and other advanced data manipulation techniques.
  • Develop and maintain data-oriented scripting using languages such as Python.
  • Create and manage data structures to ensure efficient and accurate data storage and retrieval.
  • Work with cloud and big data technologies, specifically the AWS and Azure stacks, to process and analyze large volumes of data.
  • Utilize ETL tools such as Nifi and Tibco to extract, transform, and load data into various systems.
  • Have hands-on experience with database structures, particularly MSSQL and Vertica, to optimize data storage and retrieval.
  • Manage and maintain the operations of data platforms, ensuring data availability, reliability, and security.
  • Collaborate with cross-functional teams to understand data requirements and design appropriate data solutions.
  • Stay up-to-date with the latest industry trends and advancements in data engineering and suggest improvements to enhance our data infrastructure.

 

Requirements:

  • A minimum of 6 years of relevant experience as a Data Engineer.
  • Proficiency in ETL, SQL, and other advanced data engineering techniques.
  • Strong programming skills in scripting languages such as Python.
  • Experience in creating and maintaining data structures for efficient data storage and retrieval.
  • Familiarity with cloud and big data technologies, specifically the AWS and Azure stacks.
  • Hands-on experience with ETL tools, particularly Nifi and Tibco.
  • In-depth knowledge of database structures, including MSSQL and Vertica.
  • Proven experience in managing and operating data platforms.
  • Strong problem-solving and analytical skills with the ability to handle complex data challenges.
  • Excellent communication and collaboration skills to work effectively in a team environment.
  • Self-motivated with a strong drive for learning and keeping up-to-date with the latest industry trends.

Read more
Emids Technologies

at Emids Technologies

2 candid answers
Rima Mishra
Posted by Rima Mishra
Bengaluru (Bangalore)
5 - 10 yrs
₹4L - ₹18L / yr
Jasper
JasperReports
ETL
JasperSoft
OLAP
+3 more

Job Description - Jasper 

  • Knowledge of Jasper report server administration, installation and configuration
  • Knowledge of report deployment and configuration
  • Knowledge of Jaspersoft Architecture and Deployment
  • Knowledge of User Management in Jaspersoft Server
  • Experience in developing Complex Reports using Jaspersoft Server and Jaspersoft Studio
  • Understand the Overall architecture of Jaspersoft BI
  • Experience in creating Ad Hoc Reports, OLAP, Views, Domains
  • Experience in report server (Jaspersoft) integration with web application
  • Experience in JasperReports Server web services API and Jaspersoft Visualise.JS Web service API
  • Experience in creating dashboards with visualizations
  • Experience in security and auditing, metadata layer
  • Experience in Interacting with stakeholders for requirement gathering and Analysis
  • Good knowledge of ETL design and development, and of logical and physical data modeling (relational and dimensional)
  • Strong self-initiative to strive for both personal & technical excellence.
  • Coordinate efforts across the Product Development team and Business Analyst team.
  • Strong business and data analysis skills.
  • Domain knowledge of Healthcare is an advantage.
  • Should be strong at coordinating with onshore resources on development.
  • Data oriented professional with good communications skills and should have a great eye for detail.
  • Interpret data, analyze results and provide insightful inferences
  • Maintain relationship with Business Intelligence stakeholders
  • Strong Analytical and Problem Solving skills 


Read more
Personal Care Product Manufacturing
Mumbai
3 - 8 yrs
₹12L - ₹30L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+9 more

DATA ENGINEER


Overview

They started with a singular belief - what is beautiful cannot and should not be defined in marketing meetings. It's defined by the regular people like us, our sisters, our next-door neighbours, and the friends we make on the playground and in lecture halls. That's why we stand for people-proving everything we do. From the inception of a product idea to testing the final formulations before launch, our consumers are a part of each and every process. They guide and inspire us by sharing their stories with us. They tell us not only about the product they need and the skincare issues they face but also the tales of their struggles, dreams and triumphs. Skincare goes deeper than skin. It's a form of self-care for many. Wherever someone is on this journey, we want to cheer them on through the products we make, the content we create and the conversations we have. What we wish to build is more than a brand. We want to build a community that grows and glows together - cheering each other on, sharing knowledge, and ensuring people always have access to skincare that really works.

 

Job Description:

We are seeking a skilled and motivated Data Engineer to join our team. As a Data Engineer, you will be responsible for designing, developing, and maintaining the data infrastructure and systems that enable efficient data collection, storage, processing, and analysis. You will collaborate with cross-functional teams, including data scientists, analysts, and software engineers, to implement data pipelines and ensure the availability, reliability, and scalability of our data platform.


Responsibilities:

Design and implement scalable and robust data pipelines to collect, process, and store data from various sources.

Develop and maintain data warehouse and ETL (Extract, Transform, Load) processes for data integration and transformation.

Optimize and tune the performance of data systems to ensure efficient data processing and analysis.

Collaborate with data scientists and analysts to understand data requirements and implement solutions for data modeling and analysis.

Identify and resolve data quality issues, ensuring data accuracy, consistency, and completeness.

Implement and maintain data governance and security measures to protect sensitive data.

Monitor and troubleshoot data infrastructure, perform root cause analysis, and implement necessary fixes.

Stay up-to-date with emerging technologies and industry trends in data engineering and recommend their adoption when appropriate.


Qualifications:

Bachelor’s or higher degree in Computer Science, Information Systems, or a related field.

Proven experience as a Data Engineer or similar role, working with large-scale data processing and storage systems.

Strong programming skills in languages such as Python, Java, or Scala.

Experience with big data technologies and frameworks like Hadoop, Spark, or Kafka.

Proficiency in SQL and database management systems (e.g., MySQL, PostgreSQL, or Oracle).

Familiarity with cloud platforms like AWS, Azure, or GCP, and their data services (e.g., S3, Redshift, BigQuery).

Solid understanding of data modeling, data warehousing, and ETL principles.

Knowledge of data integration techniques and tools (e.g., Apache Nifi, Talend, or Informatica).

Strong problem-solving and analytical skills, with the ability to handle complex data challenges.

Excellent communication and collaboration skills to work effectively in a team environment.


Preferred Qualifications:

Advanced knowledge of distributed computing and parallel processing.

Experience with real-time data processing and streaming technologies (e.g., Apache Kafka, Apache Flink).

Familiarity with machine learning concepts and frameworks (e.g., TensorFlow, PyTorch).

Knowledge of containerization and orchestration technologies (e.g., Docker, Kubernetes).

Experience with data visualization and reporting tools (e.g., Tableau, Power BI).

Certification in relevant technologies or data engineering disciplines.



Read more
Invesco

at Invesco

4 candid answers
Chandana Srii
Posted by Chandana Srii
Hyderabad
3 - 7 yrs
₹10L - ₹30L / yr
React.js
Javascript
Plotly
R Language
D3.js
+2 more

Invesco is seeking a skilled React.js Developer with a strong background in data analytics to join our team. The Engineer within the Enterprise Risk function will manage complex data engineering, analysis, and programming tasks in support of the execution of the Enterprise Risk and Internal Audit activities and projects as defined by the Enterprise Risk Analytics leadership team. The Engineer will manage and streamline data extraction, transformation, and load processes, design use cases, analyze data, and apply creativity and data science techniques to deliver effective and efficient solutions enabling greater risk intelligence and insights.

 

Key Responsibilities / Duties:

 

  • Acquire, transform, and manage data supporting risk and control-related activities
  • Design, build, and maintain data analytics and data science tools supporting individual tasks and projects.
  • Maintain programming code, software packages, and databases to support ongoing analytics activities.
  • Exercise judgment in determining the application of data analytics for business processes, including the identification and location of data sources.
  • Actively discover data analytics capabilities within the firm and leverage such capabilities where possible.
  • Introduce new data analytics-related tools and technologies to business partners and consumers.
  • Support business partners’, data analysts’, and consumers’ understanding of product logic and related system processes
  • Share learnings and alternative techniques with other Engineers and Data Analysts.
  • Perform other duties and special projects as assigned by the Enterprise Risk Analytics leadership team and other leaders across Enterprise Risk and Internal Audit.
  • Actively contribute to developing a culture of innovation within the department and risk and control awareness throughout the organization.
  • Keep Head of Data Science & Engineering and departmental leadership informed of activities.

Work Experience / Knowledge:

 

  • Minimum 5 years of experience in data analysis, data management, software development or data-related risk management roles; previous experience in programming will be considered.
  • Experience within the financial services sector preferred

Skills / Other Personal Attributes Required:

 

  • Proactive problem solver with the ability to identify, design, and deliver solutions based on high level objectives and detailed requirements. Thoroughly identify and investigate issues and determine the appropriate course of action
  • Excellent code development skills supporting data analytics and visualization, preferably in programming languages and libraries such as JavaScript, R, Python, NextUI, ReactUI, Shiny, Streamlit, Plotly, and D3.js
  • Excellent with data extraction, transformation, and load processes (ETL), structured query language (SQL), and database management. 
  • Strong self-learner to continuously develop new technical capabilities to become more efficient and productive
  • Experience using end user data analytics software such as Tableau, PowerBI, SAS, and Excel a plus
  • Proficient with various disciplines of data science, such as machine learning, natural language processing and network science
  • Experience with end-to-end implementation of web applications on AWS, including using services such as EC2, EKS, RDS, ALB, Route53 and Airflow a plus
  • Self-starter and motivated; must be able to work without frequent direct supervision
  • Proficient with Microsoft Office applications (Teams, Outlook, MS Word, Excel, PowerPoint etc.)
  • Excellent analytical and problem-solving skills
  • Strong project management and administrative skills
  • Strong written and verbal communication skills (English)
  • Results-oriented and comfortable as an individual contributor on specific assignments
  • Ability to handle confidential information and communicate clearly with individuals at a wide range of levels on sensitive matters
  • Demonstrated ability to work in a diverse, cross-functional, and international environment
  • Adaptable and comfortable with changing environment
  • Demonstrates high professional ethics

Formal Education:

 

  • Bachelor’s degree in Information Systems, Computer Science, Computer Engineering, Mathematics, Statistics, Data Science, or Statistics preferred. Other technology or quantitative finance-related degrees considered depending upon relevant experience
  • MBA, Master’s degree in Information Systems, Computer Science, Mathematics, Statistics, Data Science, or Finance a plus

License / Registration / Certification:

 

  • Professional data science, analytics, business intelligence, visualization, and/or development designation (e.g., CAP, CBIP, or other relevant product-specific certificates) or actively pursuing the completion of such designation preferred
  • Other certifications considered depending on domain and relevant experience

Working Conditions:

 

Potential for up to 10% domestic and international travel

 

Read more
TekClan
Tanu Shree
Posted by Tanu Shree
Chennai
4 - 7 yrs
Best in industry
MS SQLServer
SQL Programming
SQL
ETL
ETL management
+5 more

Company - Tekclan Software Solutions

Position – SQL Developer

Experience – Minimum 4+ years of experience in MS SQL server, SQL Programming, ETL development.

Location - Chennai


We are seeking a highly skilled SQL Developer with expertise in MS SQL Server, SSRS, SQL programming, writing stored procedures, and proficiency in ETL using SSIS. The ideal candidate will have a strong understanding of database concepts, query optimization, and data modeling.


Responsibilities:

1. Develop, optimize, and maintain SQL queries, stored procedures, and functions for efficient data retrieval and manipulation.

2. Design and implement ETL processes using SSIS for data extraction, transformation, and loading from various sources.

3. Collaborate with cross-functional teams to gather business requirements and translate them into technical specifications.

4. Create and maintain data models, ensuring data integrity, normalization, and performance.

5. Generate insightful reports and dashboards using SSRS to facilitate data-driven decision making.

6. Troubleshoot and resolve database performance issues, bottlenecks, and data inconsistencies.

7. Conduct thorough testing and debugging of SQL code to ensure accuracy and reliability.

8. Stay up-to-date with emerging trends and advancements in SQL technologies and provide recommendations for improvement.

9. Should be an independent and individual contributor.


Requirements:

1. Minimum of 4+ years of experience in MS SQL server, SQL Programming, ETL development.

2. Proven experience as a SQL Developer with a strong focus on MS SQL Server.

3. Proficiency in SQL programming, including writing complex queries, stored procedures, and functions.

4. In-depth knowledge of ETL processes and hands-on experience with SSIS.

5. Strong expertise in creating reports and dashboards using SSRS.

6. Familiarity with database design principles, query optimization, and data modeling.

7. Experience with performance tuning and troubleshooting SQL-related issues.

8. Excellent problem-solving skills and attention to detail.

9. Strong communication and collaboration abilities.

10. Ability to work independently and handle multiple tasks simultaneously.


Preferred Skills:

1. Certification in MS SQL Server or related technologies.

2. Knowledge of other database systems such as Oracle or MySQL.

3. Familiarity with data warehousing concepts and tools.

4. Experience with version control systems.
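
By way of illustration, a minimal Python sketch of the extract-transform-load pattern and stored-procedure usage this role works with (it is not an SSIS package, and the connection strings, table and procedure names are hypothetical placeholders):

# Illustrative only: a tiny ETL step against MS SQL Server using pyodbc.
# The DSNs, tables and the procedure below are hypothetical placeholders.
import pyodbc

SRC = "DRIVER={ODBC Driver 17 for SQL Server};SERVER=src-host;DATABASE=SalesSrc;Trusted_Connection=yes;"
DST = "DRIVER={ODBC Driver 17 for SQL Server};SERVER=dwh-host;DATABASE=SalesDW;Trusted_Connection=yes;"

def run_daily_load(load_date: str) -> None:
    with pyodbc.connect(SRC) as src, pyodbc.connect(DST) as dst:
        # Extract: pull one day of orders from the source system.
        rows = src.cursor().execute(
            "SELECT order_id, customer_id, amount FROM dbo.Orders WHERE order_date = ?",
            load_date,
        ).fetchall()

        # Transform: trivial cleanup (skip non-positive amounts).
        cleaned = [(r.order_id, r.customer_id, float(r.amount)) for r in rows if r.amount > 0]

        # Load: bulk insert into a staging table, then call a stored procedure
        # that merges staging into the warehouse fact table.
        cur = dst.cursor()
        cur.fast_executemany = True
        cur.executemany(
            "INSERT INTO stg.Orders (order_id, customer_id, amount) VALUES (?, ?, ?)",
            cleaned,
        )
        cur.execute("EXEC dwh.usp_MergeOrders @LoadDate = ?", load_date)
        dst.commit()

if __name__ == "__main__":
    run_daily_load("2024-01-31")

In practice the same flow would typically be packaged as an SSIS data flow plus an Execute SQL task, with the merge logic kept in stored procedures as the posting describes.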

Read more
Gipfel & Schnell Consultings Pvt Ltd
TanmayaKumar Pattanaik
Posted by TanmayaKumar Pattanaik
Bengaluru (Bangalore)
3 - 9 yrs
₹9L - ₹30L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+10 more

Qualifications & Experience:


▪ 2 - 4 years of overall experience in ETL, data pipelines, data warehouse development and database design

▪ Software solution development using Hadoop Technologies such as MapReduce, Hive, Spark, Kafka, Yarn/Mesos etc.

▪ Expert in SQL, worked on advanced SQL for at least 2+ years

▪ Good development skills in Java, Python or other languages

▪ Experience with EMR, S3

▪ Knowledge and exposure to BI applications, e.g. Tableau, Qlikview

▪ Comfortable working in an agile environment
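
As a rough sketch of the EMR/S3 experience mentioned above, the snippet below submits a Spark step to an existing EMR cluster with boto3; the cluster ID, bucket and script path are hypothetical placeholders:

# Illustrative only: submit a PySpark ETL script stored on S3 as a step to a
# running EMR cluster. Cluster ID, bucket and script path are placeholders.
import boto3

emr = boto3.client("emr", region_name="ap-south-1")

response = emr.add_job_flow_steps(
    JobFlowId="j-EXAMPLE123",
    Steps=[
        {
            "Name": "daily-orders-etl",
            "ActionOnFailure": "CONTINUE",
            "HadoopJarStep": {
                "Jar": "command-runner.jar",
                "Args": [
                    "spark-submit",
                    "--deploy-mode", "cluster",
                    "s3://example-data-jobs/etl/orders_daily.py",
                    "--run-date", "2024-01-31",
                ],
            },
        }
    ],
)
print("submitted step:", response["StepIds"][0])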

Read more
InfoCepts
Lalsaheb Bepari
Posted by Lalsaheb Bepari
Chennai, Pune, Nagpur
7 - 10 yrs
₹5L - ₹15L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+5 more

Responsibilities:

 

• Designing Hive/HCatalog data models, including table definitions, file formats and compression techniques for structured and semi-structured data processing

• Implementing Spark-based ETL processing frameworks

• Implementing Big Data pipelines for data ingestion, storage, processing and consumption

• Modifying the Informatica-Teradata and Unix-based data pipelines

• Enhancing the Talend-Hive/Spark and Unix-based data pipelines

• Developing and deploying Scala/Python-based Spark jobs for ETL processing

• Applying strong SQL and data warehousing (DWH) concepts
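
As a small illustration of the Spark-based ETL work described above, a PySpark job of this shape might read a landed feed, apply light transformations and write a partitioned Hive table; the paths, database and column names are hypothetical placeholders:

# Illustrative sketch of a PySpark ETL job; names and paths are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder.appName("orders_daily_etl")
    .enableHiveSupport()
    .getOrCreate()
)

# Ingest: read semi-structured JSON landed by an upstream feed.
raw = spark.read.json("hdfs:///data/landing/orders/2024-01-31/")

# Process: basic cleansing and derivation.
orders = (
    raw.filter(F.col("amount") > 0)
    .withColumn("order_date", F.to_date("order_ts"))
    .select("order_id", "customer_id", "amount", "order_date")
)

# Consume: write to a partitioned, Parquet-backed Hive table.
(
    orders.write.mode("overwrite")
    .format("parquet")
    .partitionBy("order_date")
    .saveAsTable("analytics.fct_orders")
)

spark.stop()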

 

Preferred Background:

 

• Function as an integrator between business needs and technology solutions, helping to create technology solutions that meet clients’ business needs

• Lead project efforts in defining scope, planning, executing, and reporting to stakeholders on strategic initiatives

• Understand the business’s EDW systems and create high-level design and low-level implementation documents

• Understand the business’s Big Data Lake systems and create high-level design and low-level implementation documents

• Design Big Data pipelines for data ingestion, storage, processing and consumption

Read more
Shiprocket

at Shiprocket

5 recruiters
Kailuni Lanah
Posted by Kailuni Lanah
Gurugram
4 - 10 yrs
₹25L - ₹35L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+4 more

We are seeking an experienced Senior Data Platform Engineer to join our team. The ideal candidate should have extensive experience with Pyspark, Airflow, Presto, Hive, Kafka and Debezium, and should be passionate about developing scalable and reliable data platforms.

Responsibilities:

  • Design, develop, and maintain our data platform architecture using Pyspark, Airflow, Presto, Hive, Kafka, and Debezium.
  • Develop and maintain ETL processes to ingest, transform, and load data from various sources into our data platform.
  • Work closely with data analysts, data scientists, and other stakeholders to understand their requirements and design solutions that meet their needs.
  • Implement and maintain data governance policies and procedures to ensure data quality, privacy, and security.
  • Continuously monitor and optimize the performance of our data platform to ensure scalability, reliability, and cost-effectiveness.
  • Keep up-to-date with the latest trends and technologies in the field of data engineering and share knowledge and best practices with the team.

Requirements:

  • Bachelor's degree in Computer Science, Information Technology, or related field.
  • 5+ years of experience in data engineering or related fields.
  • Strong proficiency in Pyspark, Airflow, Presto, Hive, Datalake, and Debezium.
  • Experience with data warehousing, data modeling, and data governance.
  • Experience working with large-scale distributed systems and cloud platforms (e.g., AWS, GCP, Azure).
  • Strong problem-solving skills and ability to work independently and collaboratively.
  • Excellent communication and interpersonal skills.

If you are a self-motivated and driven individual with a passion for data engineering and a strong background in Pyspark, Airflow, Presto, Hive, Datalake, and Debezium, we encourage you to apply for this exciting opportunity. We offer competitive compensation, comprehensive benefits, and a collaborative work environment that fosters innovation and growth.
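
For illustration, a skeletal Airflow DAG of the kind such a platform might run, wiring an extract-transform-load sequence together; the DAG ID, schedule and task bodies are hypothetical placeholders rather than the actual pipeline:

# Illustrative Airflow DAG sketching an ingest -> transform -> load flow.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # e.g. pull change events captured by Debezium from a Kafka topic
    print("extracting change events for", context["ds"])


def transform(**context):
    # e.g. run a PySpark job that flattens and deduplicates the events
    print("transforming batch for", context["ds"])


def load(**context):
    # e.g. load curated data into Hive / the query layer used by Presto
    print("loading curated tables for", context["ds"])


with DAG(
    dag_id="orders_cdc_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@hourly",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load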

Read more
Exponentia.ai

at Exponentia.ai

1 product
1 recruiter
Vipul Tiwari
Posted by Vipul Tiwari
Mumbai
7 - 10 yrs
₹13L - ₹19L / yr
Project Management
IT project management
Software project management
Business Intelligence (BI)
Data Warehouse (DWH)
+8 more

Role: Project Manager

Experience: 8-10 Years

Location: Mumbai


Company Profile:



Exponentia.ai is an AI tech organization with a presence across India, Singapore, the Middle East, and the UK. We are an innovative and disruptive organization, working on cutting-edge technology to help our clients transform into the enterprises of the future. We provide artificial intelligence-based products/platforms capable of automated cognitive decision-making to improve productivity, quality, and economics of the underlying business processes. Currently, we are rapidly expanding across machine learning, Data Engineering and Analytics functions. Exponentia.ai has developed long-term relationships with world-class clients such as PayPal, PayU, SBI Group, HDFC Life, Kotak Securities, Wockhardt and Adani Group amongst others.

One of the top partners of Databricks, Azure, Cloudera (a leading analytics player) and Qlik (a leader in BI technologies), Exponentia.ai has recently been awarded the ‘Innovation Partner Award’ by Qlik and the "Excellence in Business Process Automation Award" (IMEA) by Automation Anywhere.

Get to know more about us at http://www.exponentia.ai and https://in.linkedin.com/company/exponentiaai 


Role Overview:


·        The project manager will be responsible for overseeing the successful delivery of a range of projects in Business Intelligence, Data Warehousing, and Analytics/AI-ML.

·        The project manager is expected to manage the project and lead teams of BI engineers, data engineers, data scientists and application developers.


Job Responsibilities:


·        Effort estimation, creating a project plan, planning milestones and activities, and tracking progress.

·        Identify risks and issues. Come up with a mitigation plan.

·        Status reporting to both internal and external stakeholders.

·        Communicate with all stakeholders.

·        Manage end-to-end project lifecycle - requirements gathering, design, development, testing and go-live.

·        Manage end-to-end BI or data warehouse projects.

·        Must have experience in running Agile-based project development.


Technical skills


·        Experience in Business Intelligence Data warehousing or Analytics projects.

·        Understand data lake and data warehouse solutions including ETL pipelines.

·        Good to have - Knowledge of Azure Blob Storage, Azure Data Factory and Synapse Analytics.

·        Good to have - Knowledge of Qlik Sense or Power BI

·        Good to have - Certified in PMP/Prince 2 / Agile Project management.

·        Excellent written and verbal communication skills.


 Education:

MBA, B.E. or B. Tech. or MCA degree

Read more
Hyderabad, Bengaluru (Bangalore)
8 - 12 yrs
₹30L - ₹50L / yr
PHP
Javascript
React.js
Angular (2+)
AngularJS (1.x)
+17 more

CTC Budget: 35-50LPA

Location: Hyderabad/Bangalore

Experience: 8+ Years


Company Overview:


An 8-year-old IT services and consulting company based in Hyderabad, providing services that maximize product value while delivering rapid incremental innovation, with extensive SaaS company M&A experience including 20+ closed transactions on both the buy and sell sides. They have over 100 employees and are looking to grow the team.


● Work with, learn from, and contribute to a diverse, collaborative development team

● Use plenty of PHP, Go, JavaScript, MySQL, PostgreSQL, ElasticSearch, Redshift, AWS Services and other technologies

● Build efficient and reusable abstractions and systems

● Create robust cloud-based systems used by students globally at scale

● Experiment with cutting edge technologies and contribute to the company’s product roadmap

● Deliver data at scale to bring value to clients


Requirements

You will need:

● Experience working with a server side language in a full-stack environment

● Experience with various database technologies (relational, nosql, document-oriented, etc) and query concepts in high performance environments

● Experience in one of these areas: React, Backbone

● Understanding of ETL concepts and processes

● Great knowledge of design patterns and back end architecture best practices

● Sound knowledge of Front End basics like JavaScript, HTML, CSS

● Experience with TDD, automated testing

● 12+ years’ experience as a developer

● Experience with Git or Mercurial

● Fluent written & spoken English

It would be great if you have:

● B.Sc or M.Sc degree in Software Engineering, Computer Science or similar

● Experience and/or interest in API Design

● Experience with Symfony and/or Doctrine

● Experience with Go and Microservices

● Experience with message queues e.g. SQS, Kafka, Kinesis, RabbitMQ

● Experience working with a modern Big Data stack

● Contributed to open source projects

● Experience working in an Agile environment

Read more
DHL

at DHL

1 recruiter
Garima Saraswat
Posted by Garima Saraswat
Malaysia
2 - 4 yrs
₹12L - ₹15L / yr
Oracle BI Publisher
Oracle HCM
SQL Server Reporting Services (SSRS)
Business Intelligence (BI)
Oracle Business Intelligence Suite Enterprise Edition (OBIEE)
+3 more

Are you a motivated, organized person seeking a demanding and rewarding opportunity in a fast-paced environment? Would you enjoy being part of a dedicated team that works together to create a relevant, memorable difference in the lives of our customers and employees? If you're looking for change, and you're ready to make changes … we're looking for you.


This role is part of our global Team and will be responsible for driving our digitalization roadmap. You will be responsible for analyzing reporting requirements and defining solutions that meet or exceed those requirements. You will need to understand and apply systems analysis concepts and principles to effectively translate and validate business systems solutions. Further, you will apply IT and internal team methodologies and procedures to ensure solutions are defined in a consistent, standard and repeatable method.


Responsibilities

What are you accountable for achieving as Senior Oracle Fusion Reporting Specialist?

  • Design, Development and Support of Oracle Reporting applications and Dashboards.
  • Interact with internal stakeholders and translate business needs to technical specifications
  • Preparing BIP reports (Data Model and Report Templates)
  • Effectively deliver projects and ongoing support for Oracle HCM BI solutions.
  • Develop data models and reports.


Requirements

What will you need as a successful Senior Oracle Fusion Reporting Specialist / Developer?

  • Bachelor's Degree in Computer Science or more than 2 years of experience in business intelligence projects
  • Experience in programming with Oracle tools and in writing SQL Server/Oracle/SQL Stored Procedures and functions.
  • Experience in a BI environment.
  • Broad understanding of Oracle HCM Cloud Applications and database structure of the HCM application module.
  • Exposure to Oracle BI, Automation, JIRA and ETL will be an added advantage.


Read more
Vithamas Technologies Pvt LTD
Mysore
4 - 6 yrs
₹10L - ₹20L / yr
Data modeling
ETL
Oracle
MS SQLServer
MongoDB
+4 more

RequiredSkills:


• Minimum of 4-6 years of experience in data modeling (including conceptual, logical and physical data models).
• 2-3 years of experience in Extraction, Transformation and Loading (ETL) work using data migration tools like Talend, Informatica, Datastage, etc.
• 4-6 years of experience as a database developer in Oracle, MS SQL or another enterprise database, with a focus on building data integration processes.
• Candidate should have exposure to any NoSQL technology, preferably MongoDB.
• Experience in processing large data volumes, indicated by experience with Big Data platforms (Teradata, Netezza, Vertica or Cloudera, Hortonworks, SAP HANA, Cassandra, etc.).
• Understanding of data warehousing concepts and decision support systems.


• Ability to deal with sensitive and confidential material and adhere to worldwide data security and privacy requirements.
• Experience writing documentation for design and feature requirements.
• Experience developing data-intensive applications on cloud-based architectures and infrastructures such as AWS, Azure etc.
• Excellent communication and collaboration skills.

Read more
TreQ

at TreQ

Nidhi Tiwari
Posted by Nidhi Tiwari
Mumbai
2 - 5 yrs
₹7L - ₹12L / yr
ETL
Informatica
Data Warehouse (DWH)
Amazon Web Services (AWS)
PostgreSQL
+1 more

Responsibilities :


  • Involve in planning, design, development and maintenance of large-scale data repositories, pipelines, analytical solutions and knowledge management strategy
  • Build and maintain optimal data pipeline architecture to ensure scalability, connect operational systems data for analytics and business intelligence (BI) systems
  • Build data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader
  • Reporting and obtaining insights from large volumes of import/export data, and communicating relevant pointers to support decision-making
  • Preparation, analysis, and presentation of reports to the management for further developmental activities
  • Anticipate, identify and solve issues concerning data management to improve data quality


Requirements :


  • Ability to build and maintain ETL pipelines 
  • Technical Business Analysis experience and hands-on experience developing functional spec
  • Good understanding of Data Engineering principles including data modeling methodologies
  • Sound understanding of PostgreSQL
  • Strong analytical and interpersonal skills as well as reporting capabilities
Read more
3K Technologies Pvt Ltd
Agency job
via 3K Technologies Pvt Ltd by Zinal Kothadia
Bengaluru (Bangalore)
3 - 7 yrs
₹5L - ₹20L / yr
Go Programming (Golang)
Ruby on Rails (ROR)
Ruby
Python
Java
+3 more

Skills for BI Backend Developer:

- SQL development (Snowflake preferred, but any other flavour of SQL would work)

- ETL

- Stored procedures

- Creating jobs and workflows in SQL

- Some business knowledge

Read more
Gurugram
5 - 8 yrs
₹15L - ₹23L / yr
SQL
Python
Amazon Web Services (AWS)
ETL

About the co.– Our client is an agency of the world’s largest media investment company which is a part of WPP. It is a global digital transformation agency with 1200 employees across 21 nations. Our team of experts support clients in programmatic, social, paid search, analytics, technology, organic search, affiliate marketing, e-commerce and across traditional channels. 


Job Location: Gurgaon


Reporting of the role This role reports to the Technology Architect,


Key Accountabilities: This role is for a Technical Specialist who can understand and solves complex functional, technical, and architectural practices that cover data and can understand the end-to-end data lifecycle capabilities and technologies and provide architectural guidance in the selection, articulation, and use of the chosen solutions.


The successful candidate will be expected to interact with all levels of the business and technical community, seeking strong engagement with all stakeholders. The candidate should be an expert in functional data architecture & design, including strong data modelling, and should be self-disciplined, with a keenness to master, suggest, and work with different technologies & toolsets. The role also involves intensive interaction with the business and other technology functions, so good communication skills and the ability to work under pressure are essential.


What you’ll bring:


• 5-7 years of strong experience in working with SQL, Python, ETL development, and AWS.

• Strong experience in writing complex SQLs

• Good communication skills

• Work with senior staff and business leaders to identify requirements for data/information across the core business domains

• Work with Project Managers, Business Analysts and other subject matter experts to identify and understand requirements

• Good experience working with any BI tool like Tableau or Power BI

• Familiarity with various cloud technologies and their offerings within data specialization and Data Warehousing

• Snowflake is good to have.
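
As a concrete flavour of the SQL + Python + AWS combination listed above, the sketch below runs an analytical SQL query on Amazon Athena from Python with boto3; the database, table and S3 bucket are hypothetical placeholders:

# Illustrative sketch: running a complex SQL query on AWS from Python.
import time

import boto3

athena = boto3.client("athena", region_name="ap-south-1")

QUERY = """
SELECT campaign_id, date_trunc('day', event_ts) AS day, COUNT(*) AS clicks
FROM media_events
WHERE event_type = 'click'
GROUP BY 1, 2
ORDER BY day
"""

def run_query() -> str:
    resp = athena.start_query_execution(
        QueryString=QUERY,
        QueryExecutionContext={"Database": "marketing_analytics"},
        ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
    )
    qid = resp["QueryExecutionId"]

    # Poll until the query finishes, then return the result location on S3.
    while True:
        state = athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(2)
    if state != "SUCCEEDED":
        raise RuntimeError(f"Athena query ended in state {state}")
    return f"s3://example-athena-results/{qid}.csv"

if __name__ == "__main__":
    print("results at", run_query())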


Minimum qualifications:

• B. Tech./MCA preferred

• Excellent hands-on experience of 5 years in Big Data, ETL development, and data processing


Regards

Team Merito

Read more
CareStack
Suman Narayan U
Posted by Suman Narayan U
Thiruvananthapuram, Bengaluru (Bangalore)
2 - 5 yrs
Best in industry
.NET
ASP.NET
C#
Data Structures
Algorithms
+7 more

 

Company Overview

 

CareStack is a complete cloud-based dental software solution for scheduling, clinical, billing, patient engagement, and reporting needs of dental offices of any size - whether it's a single location or a large multi-site DSO with hundreds of locations.

The company was founded in 2015 and the commercial launch was done in early 2018. Since then, more than 1000 offices have chosen CareStack as their single source of truth. This is the fastest growth to date in the dental practice management software market, which is dominated by 100-year-old distribution companies.

 

 

More about CareStack

 

●    Rated by independent B2B software reviews and research analysts as the most modern, innovative and customer experience focussed company in the space with the fastest growth in the segment.

●    Important strategic go to market partnerships with dental industry leaders like Delta Dental, Darby Dental and several others.

●    Venture backed with over $60M raised from leading financial and strategic investors.

●    HQ'd in Orlando, FL with offices in Minnesota, Bangalore, Trivandrum and Cochin.


Role Overview

 

CareStack seeks to hire a Software Development Engineer - 1 to build its next generation healthcare platform. You will report to a Staff SDE in your business unit to develop and release services and solutions for our platform, aligning with your business unit’s goals and objectives.

 

Key responsibilities


●    Technical Design

You can be a specialist in technology areas, but capable of creating complete designs to solve a specific problem that accomplishes a definitive product feature or enables a technical advancement. You should work with your lead to review your design and embrace the feedback.

●    Process Definition/Adherence

You should deliver estimations, review test scenarios/cases created by QAs in your team, participate in sprint grooming and planning meetings. You must learn, practice and evangelize standard techniques for grooming, defining complexity of stories and estimation.

●    Code Quality

At Carestack we believe that your code reflects your character. Code for scale, produce maintainable reusable code. You must commit to clean coding, continuous refactoring to ensure a high level of code quality. Continuously learn, practice and evangelize coding patterns/best practices within and outside your team. You should ensure testability of team functional areas, facilitate integration testing, resolve deep rooted technical issues and proactively collaborate with team members in solving complex problems.

●    Team Player

You should proactively communicate to resolve dependencies within and outside the team. Understand the organization's culture code and streamline conversations and activities that will further instill this code. Mentor and coach new additions to your team.

●    Value/Mission Alignment

Be a champion for CareStack within the Engineering team. Help drive workplace and cultural norms within your business unit that align with CareStack company values.

 

This role may be for you if you…


●    Have an insatiable itch to join and the courage to scale an early-stage technology company.

●    Have 2-4 years of experience in building modern day web platforms, using Microsoft technologies on the backend.

●    Can be a generalist or specialist with deep understanding of building software components that meet defined requirements, with good understanding of .NET Core/ASP.NET.

●    Are proficient in data structures and algorithms, and object oriented analysis and design of software systems.

●    Are a backend specialist with good understanding of event driven programming, distributed systems, caching/in-memory computing systems, data modeling for transactional systems.

●    Solid understanding of relational and non-relational databases, including warehouses. Preferably MySQL, MSSQL, Vertica and CosmosDB

●    Solid understanding of CI/CD patterns and IaC.

●    Expertise in building ETLs in both batch and streaming model.

●    Expertise in building and testing APIs and micro services


This role may not be for you if you…


●    Don’t have the itch to design solutions and write code and you don’t grab every other opportunity to review and write code.

●    Don’t have the fire in you to fight for your design suggestions and debate with logical data points

●    Don’t have the trait to be transactional in code reviews/design discussions, accept your mistakes and appreciate better ideas from your peers.

●    Haven’t developed the habit of doing objective conversations that are data driven.

Read more
Delhi, Gurugram, Noida, Ghaziabad, Faridabad
10 - 15 yrs
₹10L - ₹15L / yr
Data Warehouse (DWH)
Informatica
ETL
Amazon Web Services (AWS)
Migration

Greetings !!!


Looking Urgently !!!


Exp-Min 10 Years

Location-Delhi

Sal-nego



Role

AWS Data Migration Consultant

Provide Data Migration strategy, expert review and guidance on Data Migration from on-prem to AWS infrastructure that includes AWS Fargate, PostgreSQL and DynamoDB. This includes review and SME inputs on:

·       Data migration plan, architecture, policies, procedures

·       Migration testing methodologies

·       Data integrity, consistency, resiliency.

·       Performance and Scalability

·       Capacity planning

·       Security, access control, encryption

·       DB replication and clustering techniques

·       Migration risk mitigation approaches

·       Verification and integrity testing, reporting (Record and field level verifications)

·       Schema consistency and mapping

·       Logging, error recovery

·       Dev-test, staging and production artifact promotions and deployment pipelines

·       Change management

·       Backup, DR approaches and best practices.
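
As an illustration of the record-level verification and integrity testing called out above, a simple Python check might compare row counts and checksums between the on-prem source and the migrated AWS target; connection details and table names are hypothetical placeholders:

# Illustrative sketch of post-migration verification for one table.
import hashlib

import psycopg2

SOURCE_DSN = "host=onprem-db dbname=core user=verifier password=..."
TARGET_DSN = "host=target.cluster.ap-south-1.rds.amazonaws.com dbname=core user=verifier password=..."

def table_fingerprint(dsn: str, table: str, key: str) -> tuple:
    """Return (row_count, md5 of all rows ordered by the key column)."""
    digest = hashlib.md5()
    count = 0
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        cur.execute(f"SELECT * FROM {table} ORDER BY {key}")
        for row in cur:
            digest.update(repr(row).encode("utf-8"))
            count += 1
    return count, digest.hexdigest()

def verify(table: str, key: str) -> None:
    src = table_fingerprint(SOURCE_DSN, table, key)
    tgt = table_fingerprint(TARGET_DSN, table, key)
    status = "OK" if src == tgt else "MISMATCH"
    print(f"{table}: source={src} target={tgt} -> {status}")

if __name__ == "__main__":
    verify("public.accounts", "account_id")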


Qualifications

  • Worked on mid to large scale data migration projects, specifically from on-prem to AWS, preferably in BFSI domain
  • Deep expertise in AWS Redshift, PostgreSQL, DynamoDB from data management, performance, scalability and consistency standpoint
  • Strong knowledge of AWS Cloud architecture and components, solutions, well architected frameworks
  • Expertise in SQL and DB performance related aspects
  • Solution Architecture work for enterprise grade BFSI applications
  • Successful track record of defining and implementing data migration strategies
  • Excellent communication and problem solving skills
  • 10+ years of experience in Technology, with at least 4 years in AWS and DBA/DB management/migration-related work
  • Bachelor's degree or higher in Engineering or a related field


Read more
Mumbai
10 - 15 yrs
₹8L - ₹15L / yr
Spotfire
Qlikview
Tableau
PowerBI
Data Visualization
+6 more

Exp-Min 10 Years

Location Mumbai

Sal-Nego

 

 

Powerbi, Tableau, QlikView,

 

 

Solution Architect/Technology Lead – Data Analytics

 

Role

Looking for a Business Intelligence Lead (BI Lead) with hands-on experience in BI tools (Tableau, SAP Business Objects, Financial and Accounting modules, Power BI), SAP integration, and database knowledge covering one or more of Azure Synapse/Data Factory, SQL Server, Oracle and the cloud-based DB Snowflake. Good knowledge of AI-ML and Python is also expected.

  • You will be expected to work closely with our business users. The development will be performed using an Agile methodology which is based on scrum (time boxing, daily scrum meetings, retrospectives, etc.) and XP (continuous integration, refactoring, unit testing, etc) best practices. Candidates must therefore be able to work collaboratively, demonstrate good ownership, leadership and be able to work well in teams.
Responsibilities:
  • Design, development and support of multiple/hybrid data sources and data visualization frameworks using Power BI, Tableau, SAP Business Objects etc., and using ETL tools, scripting, Python scripting etc.
  • Implementing DevOps techniques and practices like Continuous Integration, Continuous Deployment, Test Automation, Build Automation and Test-Driven Development to enable the rapid delivery of working code, utilizing tools like Git.

Primary Skills

Requirements

  • 10+ years working as a hands-on developer in Information Technology across Database, ETL and BI (SAP Business Objects, integration with SAP Financial and Accounting modules, Tableau, Power BI) & prior team management experience
  • Tableau/PowerBI integration with SAP and knowledge of SAP modules related to finance is a must
  • 3+ years of hands-on development experience in Data Warehousing and Data Processing
  • 3+ years of Database development experience with a solid understanding of core database concepts and relational database design, SQL, Performance tuning
  • 3+ years of hands-on development experience with Tableau
  • 3+ years of Power BI experience including parameterized reports and publishing it on PowerBI Service
  • Excellent understanding and practical experience delivering under an Agile methodology
  • Ability to work with business users to provide technical support
  • Ability to get involved in all the stages of the project lifecycle, including analysis, design, development and testing

Good to Have Skills:
  • Experience with other visualization tools and reporting tools like SAP Business Objects.

 

Read more
Mactores Cognition Private Limited
Remote only
5 - 15 yrs
₹5L - ₹21L / yr
ETL
Informatica
Data Warehouse (DWH)
Amazon Web Services (AWS)
Amazon S3
+3 more

Mactores is a trusted leader among businesses in providing modern data platform solutions. Since 2008, Mactores have been enabling businesses to accelerate their value through automation by providing End-to-End Data Solutions that are automated, agile, and secure. We collaborate with customers to strategize, navigate, and accelerate an ideal path forward with a digital transformation via assessments, migration, or modernization.


We are looking for a DataOps Engineer with expertise while operating a data lake. Amazon S3, Amazon EMR, and Apache Airflow for workflow management are used to build the data lake.


You have experience of building and running data lake platforms on AWS. You have exposure to operating PySpark-based ETL Jobs in Apache Airflow and Amazon EMR. Expertise in monitoring services like Amazon CloudWatch.


If you love solving problems using your technical skills and a professional services background, you will enjoy a casual and fun office environment that actively steers clear of rigid "corporate" culture, focuses on productivity and creativity, and allows you to be part of a world-class team while still being yourself.


What you will do?


  • Operate the current data lake deployed on AWS with Amazon S3, Amazon EMR, and Apache Airflow
  • Debug and fix production issues in PySpark.
  • Determine the RCA (Root cause analysis) for production issues.
  • Collaborate with product teams for L3/L4 production issues in PySpark.
  • Contribute to enhancing the ETL efficiency
  • Build CloudWatch dashboards for optimizing the operational efficiencies
  • Handle escalation tickets from L1 Monitoring engineers
  • Assign the tickets to L1 engineers based on their expertise
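
To give a flavour of the CloudWatch dashboard work listed above, the sketch below creates a small dashboard for an EMR-backed data lake with boto3; the dashboard name, region and cluster ID are hypothetical placeholders:

# Illustrative sketch: create a CloudWatch dashboard from code.
import json

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

dashboard_body = {
    "widgets": [
        {
            "type": "metric",
            "x": 0, "y": 0, "width": 12, "height": 6,
            "properties": {
                "title": "EMR apps running",
                "region": "us-east-1",
                "stat": "Average",
                "period": 300,
                "metrics": [["AWS/ElasticMapReduce", "AppsRunning", "JobFlowId", "j-EXAMPLE123"]],
            },
        },
        {
            "type": "metric",
            "x": 12, "y": 0, "width": 12, "height": 6,
            "properties": {
                "title": "EMR HDFS utilization (%)",
                "region": "us-east-1",
                "stat": "Average",
                "period": 300,
                "metrics": [["AWS/ElasticMapReduce", "HDFSUtilization", "JobFlowId", "j-EXAMPLE123"]],
            },
        },
    ]
}

cloudwatch.put_dashboard(
    DashboardName="data-lake-operations",
    DashboardBody=json.dumps(dashboard_body),
)
print("dashboard updated")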


What are we looking for?


  • AWS DataOps engineer.
  • Overall 5+ years of experience in the software industry, with experience in developing data applications and architectures using Python or Scala, Airflow, and Kafka on an AWS data platform.
  • Must have set up or led a project to enable DataOps on AWS or any other cloud data platform.
  • Strong data engineering experience on a cloud platform, preferably AWS.
  • Experience with data pipelines designed for reuse and parameterization.
  • Experience with pipelines designed to solve common ETL problems.
  • Understanding or experience of how various AWS services, such as Amazon EMR and Apache Airflow, can be codified for enabling DataOps.
  • Experience in building data pipelines using CI/CD infrastructure.
  • Understanding of Infrastructure as Code for DataOps enablement.
  • Ability to work with ambiguity and create quick PoCs.


You will be preferred if


  • Expertise in Amazon EMR, Apache Airflow, Terraform, CloudWatch
  • Exposure to MLOps using Amazon Sagemaker is a plus.
  • AWS Solutions Architect Professional or Associate Level Certificate
  • AWS DevOps Professional Certificate


Life at Mactores


We care about creating a culture that makes a real difference in the lives of every Mactorian. Our 10 Core Leadership Principles that honor Decision-making, Leadership, Collaboration, and Curiosity drive how we work.


1. Be one step ahead

2. Deliver the best

3. Be bold

4. Pay attention to the detail

5. Enjoy the challenge

6. Be curious and take action

7. Take leadership

8. Own it

9. Deliver value

10. Be collaborative


We would like you to read more details about the work culture on https://mactores.com/careers 


The Path to Joining the Mactores Team

At Mactores, our recruitment process is structured around three distinct stages:


Pre-Employment Assessment: 

You will be invited to participate in a series of pre-employment evaluations to assess your technical proficiency and suitability for the role.


Managerial Interview: The hiring manager will engage with you in multiple discussions, lasting anywhere from 30 minutes to an hour, to assess your technical skills, hands-on experience, leadership potential, and communication abilities.


HR Discussion: During this 30-minute session, you'll have the opportunity to discuss the offer and next steps with a member of the HR team.


At Mactores, we are committed to providing equal opportunities in all of our employment practices, and we do not discriminate based on race, religion, gender, national origin, age, disability, marital status, military status, genetic information, or any other category protected by federal, state, and local laws. This policy extends to all aspects of the employment relationship, including recruitment, compensation, promotions, transfers, disciplinary action, layoff, training, and social and recreational programs. All employment decisions will be made in compliance with these principles.


Read more
Think n Solutions

at Think n Solutions

2 recruiters
TnS HR
Posted by TnS HR
Remote only
1.5 - 6 yrs
Best in industry
Microservices
RESTful APIs
Microsoft SQL Server
SQL Server Integration Services (SSIS)
SQL Server Reporting Services (SSRS)
+10 more

*Apply only if you are serving Notice Period


HIRING SQL Developers
with max 20 days Of NOTICE PERIOD


Job ID: TNS2023DB01

Who Should apply?

  • Only for Serious job seekers who are ready to work in night shift
  • Technically Strong Candidates who are willing to take up challenging roles and want to raise their Career graph
  • No DBAs & BI Developers, please

 

Why Think n Solutions Software?

  • Exposure to the latest technology
  • Opportunity to work on different platforms
  • Rapid Career Growth
  • Friendly Knowledge-Sharing Environment

 

Criteria:

  • BE/MTech/MCA/MSc
  • 2+ years of hands-on experience in MS SQL / NoSQL
  • Immediate joiners preferred/ Maximum notice period between 15 to 20 days
  • Candidates will be selected based on logical/technical and scenario-based testing
  • Work time - 10:00 pm to 6:00 am

 

Note: Candidates who have attended the interview process with TnS in the last 6 months will not be eligible.

 

 

Job Description:

 

  1. Technical Skills Desired:
    1. Experience in MS SQL Server, and one of these Relational DB’s, PostgreSQL / AWS Aurora DB / MySQL / any of NoSQL DBs (MongoDB / DynamoDB / DocumentDB) in an application development environment and eagerness to switch
    2. Design database tables, views, indexes
    3. Write functions and procedures for Middle Tier Development Team
    4. Work with any front-end developers in completing the database modules end to end (hands-on experience in the parsing of JSON & XML in Stored Procedures would be an added advantage).
    5. Query Optimization for performance improvement
    6. Design & develop SSIS Packages or any other Transformation tools for ETL

 

  2. Functional Skills Desired:
    1. Banking / Insurance / Retail domain experience would be a plus
    2. Interaction with clients would be a plus

 

3.       Good to Have Skills:

  1. Knowledge in a Cloud Platform (AWS / Azure)
  2. Knowledge on version control system (SVN / Git)
  3. Exposure to Quality and Process Management
  4. Knowledge in Agile Methodology

 

  4. Soft skills: (additional)
    1. Team building (attitude to train, work along, and mentor juniors)
    2. Communication skills (all kinds)
    3. Quality consciousness
    4. Analytical acumen to all business requirements
    5. Think out-of-box for business solution
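
As an illustration of the JSON-parsing-in-stored-procedure skill mentioned under Technical Skills, the sketch below shreds a JSON payload inside SQL Server with OPENJSON, invoked from Python via pyodbc; the connection string and table names are hypothetical placeholders:

# Illustrative sketch: shredding a JSON payload in SQL Server from Python.
import json

import pyodbc

CONN = "DRIVER={ODBC Driver 17 for SQL Server};SERVER=db-host;DATABASE=Orders;UID=etl;PWD=..."

payload = json.dumps([
    {"order_id": 1001, "customer_id": 7, "amount": 250.0},
    {"order_id": 1002, "customer_id": 9, "amount": 120.5},
])

SHRED_SQL = """
INSERT INTO dbo.OrderStaging (order_id, customer_id, amount)
SELECT order_id, customer_id, amount
FROM OPENJSON(?)
WITH (
    order_id    INT            '$.order_id',
    customer_id INT            '$.customer_id',
    amount      DECIMAL(10, 2) '$.amount'
);
"""

with pyodbc.connect(CONN) as conn:
    cur = conn.cursor()
    cur.execute(SHRED_SQL, payload)
    conn.commit()
    print(cur.rowcount, "rows staged")
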
Read more
EnterpriseMinds

at EnterpriseMinds

2 recruiters
Komal S
Posted by Komal S
Remote only
4 - 10 yrs
₹10L - ₹35L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+2 more

Enterprise Minds, with core focus on engineering products, automation and intelligence, partners customers on the trajectory towards increasing outcomes, relevance, and growth. 

Harnessing the power of Data and the forces that define AI, Machine Learning and Data Science, we believe in institutionalizing go-to-market models and not just explore possibilities. 

We believe in a customer-centric ethic without and a people-centric paradigm within. With a strong sense of community, ownership, and collaboration, our people work in a spirit of co-creation, co-innovation, and co-development to engineer next-generation software products with the help of accelerators.

Through Communities we connect and attract talent that shares skills and expertise. Through Innovation Labs and global design studios we deliver creative solutions. 
We create vertically isolated pods which have a narrow but deep focus. We also create horizontal pods to collaborate and deliver sustainable outcomes.

We follow Agile methodologies to fail fast and deliver scalable and modular solutions. We constantly self-assess and realign to work with each customer in the most impactful manner.

Pre-requisites for the Role 

 

  1. Job ID-EMBD0120PS 
  2. Primary skill: GCP DATA ENGINEER, BIGQUERY, ETL 
  3. Secondary skill: HADOOP, PYTHON, SPARK 
  4. Years of Experience: 5-8 Years 
  5. Location: Remote 

 

Budget- Open  

NP- Immediate 

 

 

GCP DATA ENGINEER 

Position description 

  • Designing and implementing software systems 
  • Creating systems for collecting data and for processing that data 
  • Using Extract Transform Load operations (the ETL process) 
  • Creating data architectures that meet the requirements of the business 
  • Researching new methods of obtaining valuable data and improving its quality 
  • Creating structured data solutions using various programming languages and tools 
  • Mining data from multiple areas to construct efficient business models 
  • Collaborating with data analysts, data scientists, and other teams. 
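
By way of illustration of the ETL process on GCP described above, a minimal BigQuery-based load-and-transform step might look like the sketch below; the project, dataset, table and bucket names are hypothetical placeholders:

# Illustrative sketch of a small ETL step on GCP with the BigQuery client.
from google.cloud import bigquery

client = bigquery.Client(project="example-analytics-project")

# Extract + Load: ingest raw CSV files from Cloud Storage into a staging table.
load_job = client.load_table_from_uri(
    "gs://example-landing/orders/2024-01-31/*.csv",
    "example-analytics-project.staging.orders_raw",
    job_config=bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.CSV,
        skip_leading_rows=1,
        autodetect=True,
        write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
    ),
)
load_job.result()  # wait for completion

# Transform: materialize a cleaned, partition-friendly table with SQL.
transform_sql = """
CREATE OR REPLACE TABLE `example-analytics-project.curated.orders` AS
SELECT order_id, customer_id, CAST(amount AS NUMERIC) AS amount, DATE(order_ts) AS order_date
FROM `example-analytics-project.staging.orders_raw`
WHERE amount > 0
"""
client.query(transform_sql).result()
print("load and transform complete")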

Candidate profile 

  • Bachelor’s or master’s degree in information systems/engineering, computer science and management or related. 
  • 5-8 years professional experience as Big Data Engineer 
  • Proficiency in modelling and maintaining Data Lakes with PySpark – preferred basis. 
  • Experience with Big Data technologies (e.g., Databricks) 
  • Ability to model and optimize workflows GCP. 
  • Experience with Streaming Analytics services (e.g., Kafka, Grafana) 
  • Analytical, innovative and solution-oriented mindset 
  • Teamwork, strong communication and interpersonal skills 
  • Rigor and organizational skills 
  • Fluency in English (spoken and written). 
Read more
EnterpriseMinds

at EnterpriseMinds

2 recruiters
Rani Galipalli
Posted by Rani Galipalli
Bengaluru (Bangalore), Pune, Mumbai
6 - 8 yrs
₹25L - ₹28L / yr
ETL
Informatica
Data Warehouse (DWH)
ETL management
SQL
+1 more

Your key responsibilities

 

  • Create and maintain optimal data pipeline architecture. Should have experience in building batch/real-time ETL Data Pipelines. Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources
  • The individual will be responsible for solution design, integration, data sourcing, transformation, database design and implementation of complex data warehousing solutions.
  • Responsible for development, support, maintenance, and implementation of a complex project module
  • Provide expertise in area and advanced knowledge of applications programming and ensure application design adheres to the overall architecture blueprint
  • Utilize advanced knowledge of system flow and develop standards for coding, testing, debugging, and implementation
  • Resolve variety of high impact problems/projects through in-depth evaluation of complex business processes, system processes, and industry standards
  • Continually improve ongoing reporting and analysis processes, automating or simplifying self-service support for complete reporting solutions.
  • Preparation of HLD about architecture of the application and high level design.
  • Preparation of LLD about job design, job description and in detail information of the jobs.
  • Preparation of Unit Test cases and execution of the same.
  • Provide technical guidance and mentoring to application development teams throughout all the phases of the software development life cycle

Skills and attributes for success

 

  • Strong experience in SQL. Proficient in writing performant SQL working with large data volumes. Proficiency in writing and debugging complex SQLs.
  • Strong experience with database systems on Microsoft Azure. Experienced in Azure Data Factory.
  • Strong in Data Warehousing concepts. Experience with large-scale data warehousing architecture and data modelling.
  • Should have enough experience to work on PowerShell scripting
  • Able to guide the team through the development, testing and implementation stages and review the completed work effectively
  • Able to make quick decisions and solve technical problems to provide an efficient environment for project implementation
  • Primary owner of delivery and timelines. Review code written by other engineers.
  • Maintain highest levels of development practices including technical design, solution development, systems configuration, test documentation/execution, issue identification and resolution, writing clean, modular and self-sustaining code, with repeatable quality and predictability
  • Must have understanding of business intelligence development in the IT industry
  • Outstanding written and verbal communication skills
  • Should be adept in SDLC process - requirement analysis, time estimation, design, development, testing and maintenance
  • Hands-on experience in installing, configuring, operating, and monitoring CI/CD pipeline tools
  • Should be able to orchestrate and automate pipeline
  • Good to have : Knowledge of distributed systems such as Hadoop, Hive, Spark
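
As a small illustration of orchestrating and automating such a pipeline, the sketch below triggers and monitors an Azure Data Factory pipeline run from Python; the subscription, resource group, factory and pipeline names are hypothetical placeholders:

# Illustrative sketch: trigger and monitor an ADF pipeline run from Python.
import time

from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"
RESOURCE_GROUP = "rg-dataplatform"
FACTORY_NAME = "adf-dwh-prod"
PIPELINE_NAME = "pl_load_sales_dwh"

client = DataFactoryManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Kick off a run with a parameter for the business date.
run = client.pipelines.create_run(
    RESOURCE_GROUP, FACTORY_NAME, PIPELINE_NAME,
    parameters={"LoadDate": "2024-01-31"},
)

# Poll the run until it reaches a terminal state.
while True:
    status = client.pipeline_runs.get(RESOURCE_GROUP, FACTORY_NAME, run.run_id).status
    if status not in ("Queued", "InProgress"):
        break
    time.sleep(30)

print("pipeline finished with status:", status)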

 

To qualify for the role, you must have

 

  • Bachelor's Degree in Computer Science, Economics, Engineering, IT, Mathematics, or related field preferred
  • More than 6 years of experience in ETL development projects
  • Proven experience in delivering effective technical ETL strategies
  • Microsoft Azure project experience
  • Technologies: ETL- ADF, SQL, Azure components (must-have), Python (nice to have)

 

Ideally, you’ll also have

Read more
Archwell
Agency job
via AVI Consulting LLP by Sravanthi Puppala
Mysore
2 - 8 yrs
₹1L - ₹15L / yr
Snowflake
Python
SQL
Amazon Web Services (AWS)
Windows Azure
+6 more

Title:  Data Engineer – Snowflake

 

Location: Mysore (Hybrid model)

Exp-2-8 yrs

Type: Full Time

Walk-in date: 25th Jan 2023 @Mysore 

 

Job Role: We are looking for an experienced Snowflake developer to join our team as a Data Engineer who will work as part of a team to help design and develop data-driven solutions that deliver insights to the business. The ideal candidate is a data pipeline builder and data wrangler who enjoys building data-driven systems that drive analytical solutions and building them from the ground up. You will be responsible for building and optimizing our data as well as building automated processes for production jobs. You will support our software developers, database architects, data analysts and data scientists on data initiatives

 

Key Roles & Responsibilities:

  • Use advanced complex Snowflake/Python and SQL to extract data from source systems for ingestion into a data pipeline.
  • Design, develop and deploy scalable and efficient data pipelines.
  • Analyze and assemble large, complex datasets that meet functional / non-functional business requirements.
  • Identify, design, and implement internal process improvements. For example: automating manual processes, optimizing data delivery, re-designing data platform infrastructure for greater scalability.
  • Build required infrastructure for optimal extraction, loading, and transformation (ELT) of data from various data sources using AWS and Snowflake leveraging Python or SQL technologies.
  • Monitor cloud-based systems and components for availability, performance, reliability, security and efficiency
  • Create and configure appropriate cloud resources to meet the needs of the end users.
  • As needed, document topology, processes, and solution architecture.
  • Share your passion for staying on top of tech trends, experimenting with and learning new technologies
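
For illustration, a minimal sketch of the Snowflake/Python ingestion step described above: stage a local extract, COPY it into a raw table and hand off to a stored procedure; the account, warehouse, stage, table and procedure names are hypothetical placeholders:

# Illustrative sketch of a Snowflake ingestion step from Python.
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account",
    user="ETL_USER",
    password="...",
    warehouse="LOAD_WH",
    database="ANALYTICS",
    schema="RAW",
)

try:
    cur = conn.cursor()
    # Upload the extract to an internal stage, then load it.
    cur.execute("PUT file:///tmp/orders_2024-01-31.csv @RAW.ORDERS_STAGE AUTO_COMPRESS=TRUE")
    cur.execute("""
        COPY INTO RAW.ORDERS
        FROM @RAW.ORDERS_STAGE
        FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
        ON_ERROR = 'ABORT_STATEMENT'
    """)
    # Hand off to a stored procedure that merges RAW into the curated layer.
    cur.execute("CALL ANALYTICS.CURATED.SP_MERGE_ORDERS('2024-01-31')")
    conn.commit()
finally:
    conn.close()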

 

Qualifications & Experience

Qualification & Experience Requirements:

  • Bachelor's degree in computer science, computer engineering, or a related field.
  • 2-8 years of experience working with Snowflake
  • 2+ years of experience with the AWS services.
  • Candidate should be able to write stored procedures and functions in Snowflake.
  • At least 2 years’ experience as a Snowflake developer.
  • Strong SQL knowledge.
  • Data ingestion into Snowflake using Snowflake procedures.
  • ETL experience is a must (with any tool).
  • Candidate should be aware of Snowflake architecture.
  • Worked on the Migration project
  • DW Concept (Optional)
  • Experience with cloud data storage and compute components including lambda functions, EC2s, containers.
  • Experience with data pipeline and workflow management tools: Airflow, etc.
  • Experience cleaning, testing, and evaluating data quality from a wide variety of ingestible data sources
  • Experience working with Linux and UNIX environments.
  • Experience with profiling data, with and without data definition documentation
  • Familiar with Git
  • Familiar with issue tracking systems like JIRA (Project Management Tool) or Trello.
  • Experience working in an agile environment.

Desired Skills:

  • Experience in Snowflake. Must be willing to be Snowflake certified in the first 3 months of employment.
  • Experience with a stream-processing system: Snowpipe
  • Working knowledge of AWS or Azure
  • Experience in migrating from on-prem to cloud systems
Read more
AArete Technosoft Pvt Ltd
Pune
7 - 12 yrs
₹25L - ₹30L / yr
Snowflake
Snow flake schema
ETL
Data Warehouse (DWH)
Python
+8 more
Help us modernize our data platforms, with a specific focus on Snowflake.

• Work with various stakeholders, understand requirements, and build solutions/data pipelines that address the needs at scale
• Bring key workloads to the clients’ Snowflake environment using scalable, reusable data ingestion and processing frameworks to transform a variety of datasets
• Apply best practices for Snowflake architecture, ELT and data models

Skills - 50% of below:

• A passion for all things data; understanding how to work with it at scale and, more importantly, knowing how to get the most out of it
• Good understanding of native Snowflake capabilities like data ingestion, data sharing, zero-copy cloning, tasks, Snowpipe etc.
• Expertise in data modeling, with a good understanding of modeling approaches like Star schema and/or Data Vault
• Experience in automating deployments
• Experience writing code in Python, Scala, Java or PHP
• Experience in ETL/ELT either via a code-first approach or using low-code tools like AWS Glue, AppFlow, Informatica, Talend, Matillion, Fivetran etc.
• Experience in one or more AWS services, especially in relation to integration with Snowflake
• Familiarity with data visualization tools like Tableau, PowerBI, Domo or any similar tool
• Experience with Data Virtualization tools like Trino, Starburst, Denodo, Data Virtuality, Dremio etc.
• Certified SnowPro Advanced: Data Engineer is a must.
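
To illustrate the code-first ELT and native-Snowflake capabilities listed above (zero-copy cloning, tasks), a sketch using the Snowflake Python connector might look like this; object names and the schedule are hypothetical placeholders:

# Illustrative sketch: zero-copy clone plus a scheduled ELT task in Snowflake.
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account",
    user="ELT_USER",
    password="...",
    warehouse="TRANSFORM_WH",
    database="ANALYTICS",
)
cur = conn.cursor()

# Zero-copy clone: an instant, storage-cheap copy of production for validation.
cur.execute("CREATE OR REPLACE DATABASE ANALYTICS_QA CLONE ANALYTICS")

# A task that runs the ELT merge every hour inside Snowflake itself.
cur.execute("""
    CREATE OR REPLACE TASK ANALYTICS.CURATED.T_MERGE_ORDERS
      WAREHOUSE = TRANSFORM_WH
      SCHEDULE = '60 MINUTE'
    AS
      MERGE INTO ANALYTICS.CURATED.ORDERS tgt
      USING ANALYTICS.RAW.ORDERS src ON tgt.ORDER_ID = src.ORDER_ID
      WHEN MATCHED THEN UPDATE SET tgt.AMOUNT = src.AMOUNT
      WHEN NOT MATCHED THEN INSERT (ORDER_ID, AMOUNT) VALUES (src.ORDER_ID, src.AMOUNT)
""")
cur.execute("ALTER TASK ANALYTICS.CURATED.T_MERGE_ORDERS RESUME")

conn.close()
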
Read more
Think n Solutions

at Think n Solutions

2 recruiters
TnS HR
Posted by TnS HR
Bengaluru (Bangalore)
2 - 12 yrs
Best in industry
Microsoft SQL Server
SQL Server Integration Services (SSIS)
SQL Server Reporting Services (SSRS)
Amazon Web Services (AWS)
SQL Azure
+9 more

Criteria:

  • BE/MTech/MCA/MSc
  • 3+ years of hands-on experience in T-SQL / PL SQL / PG SQL or NoSQL
  • Immediate joiners preferred*
  • Candidates will be selected based on logical/technical and scenario-based testing

 

Note: Candidates who have attended the interview process with TnS in the last 6 months will not be eligible.

 

Job Description:

 

  1. Technical Skills Desired:
    1. Experience in MS SQL Server and one of these Relational DB’s, PostgreSQL / AWS Aurora DB / MySQL / Oracle / NOSQL DBs (MongoDB / DynamoDB / DocumentDB) in an application development environment and eagerness to switch
    2. Design database tables, views, indexes
    3. Write functions and procedures for Middle Tier Development Team
    4. Work with any front-end developers in completing the database modules end to end (hands-on experience in parsing of JSON & XML in Stored Procedures would be an added advantage).
    5. Query Optimization for performance improvement
    6. Design & develop SSIS Packages or any other Transformation tools for ETL

 

  2. Functional Skills Desired:
    1. Banking / Insurance / Retail domain experience would be a plus
    2. Interaction with clients would be a plus

3.      Good to Have Skills:

  1. Knowledge in a Cloud Platform (AWS / Azure)
  2. Knowledge on version control system (SVN / Git)
  3. Exposure to Quality and Process Management
  4. Knowledge in Agile Methodology

 

  4. Soft skills: (additional)
    1. Team building (attitude to train, work along, mentor juniors)
    2. Communication skills (all kinds)
    3. Quality consciousness
    4. Analytical acumen to all business requirements

    5. Think out-of-the-box for business solutions
Read more
vThink Global Technologies
Balasubramanian Ramaiyar
Posted by Balasubramanian Ramaiyar
Chennai
4 - 7 yrs
₹8L - ₹15L / yr
SQL
ETL
Informatica
Data Warehouse (DWH)
Stored Procedures
+1 more
We are looking for a strong SQL Developer well versed and hands-on in SQL, Stored Procedures, Joins and ETL. A data-savvy individual with advanced SQL skills.
Read more
Remote only
12 - 20 yrs
₹24L - ₹40L / yr
API management
Windows Azure
Spring Boot
Microservices
Cloud Computing
+4 more

API Lead Developer

 

Job Overview:

As an API developer for a very large client, you will be filling the role of a hands-on Azure API Developer. We are looking for someone who has the necessary technical expertise to build and maintain sustainable API solutions to support identified needs and expectations from the client.

 

Delivery Responsibilities

  • Implement an API architecture using Azure API Management, including security, API Gateway, Analytics, and API Services
  • Design reusable assets, components, standards, frameworks, and processes to support and facilitate API and integration projects
  • Conduct functional, regression, and load testing on APIs
  • Gather requirements and define the strategy for application integration
  • Develop using the following types of Integration protocols/principles: SOAP and Web services stack, REST APIs, RESTful, RPC/RFC
  • Analyze, design, and coordinate the development of major components of the APIs including hands on implementation, testing, review, build automation, and documentation
  • Work with DevOps team to package release components to deploy into higher environment

Required Qualifications

  • Expert Hands-on experience in the following:
    • Technologies such as Spring Boot, Microservices, API Management & Gateway, Event Streaming, Cloud-Native Patterns, Observability & Performance optimizations
    • Data modelling, Master and Operational Data Stores, Data ingestion & distribution patterns, ETL / ELT technologies, Relational and Non-Relational DB's, DB Optimization patterns
  • At least 5+ years of experience with Azure APIM
  • At least 8+ years’ experience in Azure SaaS and PaaS
  • At least 8+ years’ experience in API Management including technologies such as Mulesoft and Apigee
  • At least last 5 years in consulting with the latest implementation on Azure SaaS services
  • At least 5+ years in MS SQL / MySQL development including data modeling, concurrency, stored procedure development and tuning
  • Excellent communication skills with a demonstrated ability to engage, influence, and encourage partners and stakeholders to drive collaboration and alignment
  • High degree of organization, individual initiative, results and solution oriented, and personal accountability and resiliency
  • Should be a self-starter and team player, capable of working with a team of architects, co-developers, and business analysts

 

Preferred Qualifications:

  • Ability to work as a collaborative team, mentoring and training the junior team members
  • Working knowledge on building and working on/around data integration / engineering / Orchestration
  • Position requires expert knowledge across multiple platforms, integration patterns, processes, data/domain models, and architectures.
  • Candidates must demonstrate an understanding of the following disciplines: enterprise architecture, business architecture, information architecture, application architecture, and integration architecture.
  • Ability to focus on business solutions and understand how to achieve them according to the given timeframes and resources.
  • Recognized as an expert/thought leader. Anticipates and solves highly complex problems with a broad impact on a business area.
  • Experience with Agile Methodology / Scaled Agile Framework (SAFe).
  • Outstanding oral and written communication skills including formal presentations for all levels of management combined with strong collaboration/influencing.

 

Preferred Education/Skills:

  • Prefer Master’s degree
  • Bachelor’s Degree in Computer Science with a minimum of 12+ years relevant experience or equivalent.
Read more
Talent500
Agency job
via Talent500 by ANSR by Raghu R
Bengaluru (Bangalore)
1 - 10 yrs
₹5L - ₹30L / yr
Python
ETL
SQL
SQL Server Reporting Services (SSRS)
Data Warehouse (DWH)
+6 more

A proficient, independent contributor that assists in technical design, development, implementation, and support of data pipelines; beginning to invest in less-experienced engineers.

Responsibilities:

- Design, Create and maintain on premise and cloud based data integration pipelines. 
- Assemble large, complex data sets that meet functional/non functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources.
- Build analytics tools that utilize the data pipeline to provide actionable insights into key business performance metrics.
- Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
- Create data pipelines that enable the BI, Analytics and Data Science teams and assist them in building and optimizing their systems
- Assists in the onboarding, training and development of team members.
- Reviews code changes and pull requests for standardization and best practices
- Evolve existing development to be automated, scalable, resilient, self-serve platforms
- Assist the team in the design and requirements gathering for technical and non technical work to drive the direction of projects

 

Technical & Business Expertise:

- Hands-on integration experience in SSIS/Mulesoft
- Hands-on experience with Azure Synapse
- Proven advanced-level database development experience in SQL Server
- Proven advanced-level understanding of Data Lake concepts
- Proven intermediate-level experience writing Python or a similar programming language
- Intermediate understanding of Cloud Platforms (GCP) 
- Intermediate understanding of Data Warehousing
- Advanced Understanding of Source Control (Github)
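
As a small illustration of the kind of Python-based integration step this role covers, the sketch below pulls a feed, applies a light pandas transform and appends it to a staging table over ODBC; the feed URL, connection string and table names are hypothetical placeholders:

# Illustrative sketch of a small Python data integration step.
import pandas as pd
from sqlalchemy import create_engine

# Extract: a daily partner feed landed as CSV.
feed = pd.read_csv("https://example.com/feeds/subscriptions_2024-01-31.csv")

# Transform: normalise column names and drop obviously bad rows.
feed.columns = [c.strip().lower().replace(" ", "_") for c in feed.columns]
feed = feed[feed["monthly_fee"] > 0]

# Load: append into a staging table via an ODBC connection.
engine = create_engine(
    "mssql+pyodbc://etl_user:password@synapse-host/DWH?driver=ODBC+Driver+17+for+SQL+Server"
)
feed.to_sql("stg_subscriptions", engine, schema="stg", if_exists="append", index=False)
print(f"loaded {len(feed)} rows")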

Read more
IntelliFlow Solutions Pvt Ltd

at IntelliFlow Solutions Pvt Ltd

2 candid answers
Divyashree Abhilash
Posted by Divyashree Abhilash
Remote, Bengaluru (Bangalore)
3 - 8 yrs
₹10L - ₹12L / yr
DevOps
Kubernetes
Docker
Amazon Web Services (AWS)
Windows Azure
+3 more
IntelliFlow.ai is a next-gen technology SaaS Platform company providing tools for companies to design, build and deploy enterprise applications with speed and scale. It innovates and simplifies the application development process through its flagship product, IntelliFlow. It allows business engineers and developers to build enterprise-grade applications to run frictionless operations through rapid development and process automation. IntelliFlow is a low-code platform to make business better with faster time-to-market and succeed sooner.

Looking for an experienced candidate with a strong development and programming background; knowledge of the following is preferred:

  • Cloud computing (i.e. Kubernetes, AWS, Google Cloud, Azure)
  • Coming from a strong development background and has programming experience with Java and/or NodeJS (other programming languages such as Groovy/python are a big bonus)
  • Proficient with Unix systems and bash
  • Proficient with git/GitHub/GitLab/bitbucket

 

Desired skills-

  • Docker
  • Kubernetes
  • Jenkins
  • Experience in any scripting language (Python, Shell scripting, JavaScript)
  • NGINX / Load Balancer
  • Splunk / ETL tools
Read more
British Telecom
Agency job
via posterity consulting by Kapil Tiwari
Bengaluru (Bangalore)
3 - 7 yrs
₹8L - ₹14L / yr
Data engineering
Big Data
Google Cloud Platform (GCP)
ETL
Datawarehousing
+6 more
You'll have the following skills & experience:

• Problem Solving: Resolving production issues (P1-P4 service issues), problems relating to introducing new technology, and major issues in the platform and/or service.
• Software Development Concepts: Understands and has applied a wide range of programming concepts and algorithms.
• Commercial & Risk Awareness: Able to understand and evaluate both obvious and subtle commercial risks, especially in relation to a programme.
Experience you would be expected to have:
• Cloud: experience with one of the following cloud vendors: AWS, Azure, or GCP
• GCP: experience preferred, but a willingness to learn is essential
• Big Data: experience with Big Data methodology and technologies
• Programming: Python or Java, with hands-on data (ETL) experience (a minimal example follows this list)
• DevOps: understand how to work in a DevOps and agile way / Versioning / Automation / Defect Management – Mandatory
• Agile methodology: knowledge of Jira
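
As a hedged illustration of the "Python or Java, with hands-on data (ETL) experience" point above, the sketch below runs an aggregation query in BigQuery and writes the result to a destination table. It assumes a recent google-cloud-bigquery client and application default credentials; the project, dataset, and table names are hypothetical.

```python
# Minimal sketch of a Python-on-GCP data (ETL) task: run a transformation
# query in BigQuery and write the result to a destination table.
# Assumes `pip install google-cloud-bigquery` and application default
# credentials; project, dataset, and table names are hypothetical.
from google.cloud import bigquery


def run_daily_aggregate() -> None:
    client = bigquery.Client()  # uses the default GCP project/credentials
    job_config = bigquery.QueryJobConfig(
        destination="my-project.analytics.daily_orders",
        write_disposition="WRITE_TRUNCATE",
    )
    query = """
        SELECT order_date, COUNT(*) AS orders, SUM(amount) AS revenue
        FROM `my-project.raw.orders`
        GROUP BY order_date
    """
    client.query(query, job_config=job_config).result()  # wait for completion


if __name__ == "__main__":
    run_daily_aggregate()
```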
Read more
Humancloud Technology Pvt Ltd
yash gurav
Posted by yash gurav
Pune
9 - 12 yrs
₹12L - ₹17L / yr
Project Management
IT project management
Software project management
Problem solving
Client Management
+12 more
About us:
Humancloud Technologies is a leading digital technology and innovation partner transforming businesses across the globe through its services and solutions.
We believe in helping businesses stay ahead of the curve by enabling them to leverage the new-age technology services of Blockchain, IoT, Cloud and Experience Design. At Humancloud, we have nurtured ideas from validation to production and shaped them into scalable businesses.
An experienced leadership team of IIT Delhi alumni, along with talented and supportive peers, looks forward to your onboarding.

 Roles and Responsibilities:

You will be the leader of a team of high-performing individuals who own the entire product lifecycle from strategy to evaluation.
Guide the team in achieving both individual and collective goals.
Create and manage a process that drives toward a scalable product portfolio that will in turn drive the organization’s product profitability.
Guide the team to transform the product ideas into actionable concepts aligned with the business objectives within scope and budget.
Brainstorming for new initiatives and roadmaps, keeping an eye on the market, the customers, new opportunities and the business cases for new product opportunities.
Develop and monitor data-driven analytics for project reporting and status checks.
Prepare and maintain the RAID register and include it in planning as appropriate.
Keep all stakeholders updated on project progress and risks, if any.
Initiate continuous improvements (CI) within project teams.
As and when necessary, deep-dive into code reviews and RCA (root-cause analysis).

Desired Qualification and Experience :

Able to manage projects in technologies such as Angular, Java, J2EE, Microservices, .NET, SQL Server, Microsoft SSIS, SSRS, and ETL, with hands-on experience in one or more of them.
Experience in automation and system integration.
Understand products and the workings of the commerce ecosystem in depth; have a good understanding of funnels, sprint and agile ways of working, PRDs, and wireframes.
Align with Agile methodology and collaborate with the development team and business users.

Able to demonstrate and present the necessary reports and achievements.
Comfortable with ambiguity, believe in first principles, and have the skill to transform broad ideas into action plans.
Engage with global stakeholders from a project delivery perspective and drive the project smoothly.
Establish relevant policies, processes, and procedures, and monitor compliance.


Why Join Us:
Are you inquisitive? Are you someone who believes in facing the odds with the
determination to drive ideas forward? Then, Humancloud Technologies is the place
for you.
At Humancloud Technologies, we are committed to fostering a culture of innovation,
integrity, passion, courage and empathy. We believe in human potential and the
numerous ways it can serve humanity through adopting technology. If you share a
similar belief, then we welcome you to join us.
For Further Information:
Visit our website: http://www.humancloud.co.in/

Follow us on social media: LinkedIn          
Read more
Rishabh Software
Baiju Sukumaran
Posted by Baiju Sukumaran
Remote only
7 - 9 yrs
Best in industry
ETL
Informatica
Data Warehouse (DWH)
T-SQL
MS SQLServer
+4 more

Requirements

 

  • Strong SQL coding skills including index builds, performance tuning, triggers, functions, and stored procedures.
  • Good knowledge of SSIS development, T-SQL, Views, and CTEs (a short CTE sketch follows this list)
  • Good knowledge of SSIS, Azure Data Factory, Synapse, Azure Data Lake Gen 2
  • Excellent understanding of relational databases
  • Proven experience manipulating large data sets e.g., millions of records
  • Good knowledge of data warehousing
  • Well-versed with ETL, ELT
  • Good working knowledge of MS SQL Server, with the ability to take and restore dumps, perform admin tasks, etc.
  • Good understanding and experience in software design principles, SOLID principles, and design patterns
  • Good to have experience working with Microsoft Azure SQL Server and Azure Data Warehouse environment
  • Ability to communicate and work well in an agile-based environment.
  • Able to work as an individual contributor and be a good team player.
  • Familiarity and working experience with SDLC processes
  • Possess problem-analysis and problem-solving skills
  • Excellent written and verbal communication skills
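
For context on the T-SQL/CTE requirement above, here is a minimal, self-contained sketch of a CTE-based query. It runs against an in-memory SQLite database purely as a stand-in for SQL Server (T-SQL syntax differs slightly); the table and column names are hypothetical.

```python
# Minimal sketch of the kind of CTE-based query the T-SQL requirement
# refers to, run against an in-memory SQLite database as a self-contained
# stand-in for SQL Server. Table and column names are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (customer_id INTEGER, amount REAL);
    INSERT INTO orders VALUES (1, 120.0), (1, 80.0), (2, 40.0);
""")

# A CTE that aggregates per customer, then filters on the aggregate.
query = """
    WITH customer_totals AS (
        SELECT customer_id, SUM(amount) AS total
        FROM orders
        GROUP BY customer_id
    )
    SELECT customer_id, total
    FROM customer_totals
    WHERE total > 100
"""
for customer_id, total in conn.execute(query):
    print(customer_id, total)
```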
Read more