
ETL Developer – EMEL0120PS

Posted by Rani Galipalli
6 - 8 yrs
₹25L - ₹28L / yr
Bengaluru (Bangalore), Pune, Mumbai
Skills
Data Warehouse (DWH)
Informatica
ETL
ETL management
SQL
Azure Data Factory

Your key responsibilities

 

  • Create and maintain optimal data pipeline architecture. Should have experience in building batch/real-time ETL Data Pipelines. Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources
  • The individual will be responsible for solution design, integration, data sourcing, transformation, database design and implementation of complex data warehousing solutions.
  • Responsible for development, support, maintenance, and implementation of a complex project module
  • Provide expertise in the area and advanced knowledge of applications programming, and ensure application design adheres to the overall architecture blueprint
  • Utilize advanced knowledge of system flow and develop standards for coding, testing, debugging, and implementation
  • Resolve a variety of high-impact problems/projects through in-depth evaluation of complex business processes, system processes, and industry standards
  • Continually improve ongoing reporting and analysis processes, automating or simplifying self-service support, and deliver complete reporting solutions.
  • Prepare the high-level design (HLD) describing the application architecture.
  • Prepare the low-level design (LLD) covering job design, job descriptions, and detailed job information.
  • Prepare and execute unit test cases.
  • Provide technical guidance and mentoring to application development teams throughout all the phases of the software development life cycle
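As a minimal illustration of the batch ETL pipeline work described above, here is a sketch of an extract-transform-load step. All table and column names are hypothetical, and SQLite stands in for a real warehouse:

```python
import sqlite3

# Minimal batch ETL sketch: extract raw orders, transform, load into a
# warehouse table. All table/column names here are hypothetical.
def run_batch_etl(conn: sqlite3.Connection) -> int:
    cur = conn.cursor()
    # Extract: pull raw rows from the staging table.
    rows = cur.execute("SELECT order_id, amount_cents FROM staging_orders").fetchall()
    # Transform: convert cents to a currency amount, drop invalid rows.
    cleaned = [(oid, cents / 100.0) for oid, cents in rows
               if cents is not None and cents >= 0]
    # Load: idempotent upsert into the warehouse fact table.
    cur.executemany(
        "INSERT OR REPLACE INTO fact_orders (order_id, amount) VALUES (?, ?)",
        cleaned,
    )
    conn.commit()
    return len(cleaned)

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.executescript(
        """
        CREATE TABLE staging_orders (order_id INTEGER, amount_cents INTEGER);
        CREATE TABLE fact_orders (order_id INTEGER PRIMARY KEY, amount REAL);
        INSERT INTO staging_orders VALUES (1, 1999), (2, -5), (3, 2500);
        """
    )
    print(run_batch_etl(conn))  # prints 2 (the row with a negative amount is dropped)
```

The same shape scales up in a real pipeline: the extract becomes a source-system query, the transform a set of validated business rules, and the load an idempotent upsert into the warehouse.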

Skills and attributes for success

 

  • Strong experience in SQL: proficient in writing, debugging, and tuning complex SQL against large data volumes.
  • Strong experience with Microsoft Azure database systems; experienced in Azure Data Factory.
  • Strong in Data Warehousing concepts. Experience with large-scale data warehousing architecture and data modelling.
  • Should have solid experience with PowerShell scripting
  • Able to guide the team through the development, testing and implementation stages and review the completed work effectively
  • Able to make quick decisions and solve technical problems to provide an efficient environment for project implementation
  • Primary owner of delivery and timelines. Review code written by other engineers.
  • Maintain highest levels of development practices including technical design, solution development, systems configuration, test documentation/execution, issue identification and resolution, writing clean, modular and self-sustaining code, with repeatable quality and predictability
  • Must have an understanding of business intelligence development in the IT industry
  • Outstanding written and verbal communication skills
  • Should be adept in SDLC process - requirement analysis, time estimation, design, development, testing and maintenance
  • Hands-on experience in installing, configuring, operating, and monitoring CI/CD pipeline tools
  • Should be able to orchestrate and automate pipelines
  • Good to have: Knowledge of distributed systems such as Hadoop, Hive, Spark
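One common pattern behind writing performant SQL over large data volumes in ETL work is a watermark-driven incremental load: each run pulls only rows newer than the last processed timestamp instead of re-scanning the source. A small sketch, using SQLite as a stand-in and hypothetical table/column names:

```python
import sqlite3

# Watermark-based incremental load: each run pulls only rows newer than the
# last processed timestamp, so large sources are not re-scanned in full.
# All table/column names here are hypothetical.
def incremental_load(conn: sqlite3.Connection) -> int:
    cur = conn.cursor()
    (watermark,) = cur.execute("SELECT last_ts FROM etl_watermark").fetchone()
    new_rows = cur.execute(
        "SELECT id, ts, value FROM source_events WHERE ts > ? ORDER BY ts",
        (watermark,),
    ).fetchall()
    if new_rows:
        cur.executemany(
            "INSERT OR REPLACE INTO target_events VALUES (?, ?, ?)", new_rows
        )
        # Advance the watermark to the newest timestamp just loaded.
        cur.execute("UPDATE etl_watermark SET last_ts = ?", (new_rows[-1][1],))
    conn.commit()
    return len(new_rows)

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.executescript(
        """
        CREATE TABLE source_events (id INTEGER, ts INTEGER, value TEXT);
        CREATE TABLE target_events (id INTEGER PRIMARY KEY, ts INTEGER, value TEXT);
        CREATE TABLE etl_watermark (last_ts INTEGER);
        INSERT INTO etl_watermark VALUES (0);
        INSERT INTO source_events VALUES (1, 10, 'a'), (2, 20, 'b');
        """
    )
    print(incremental_load(conn))  # first run loads both rows: prints 2
    print(incremental_load(conn))  # nothing new since the watermark: prints 0
```

In Azure Data Factory the same idea appears as the watermark/delta pattern in a copy pipeline; the SQL shape is what carries over.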

 

To qualify for the role, you must have

 

  • Bachelor's Degree in Computer Science, Economics, Engineering, IT, Mathematics, or related field preferred
  • More than 6 years of experience in ETL development projects
  • Proven experience in delivering effective technical ETL strategies
  • Microsoft Azure project experience
  • Technologies: ETL- ADF, SQL, Azure components (must-have), Python (nice to have)

 

Ideally, you’ll also have


About EnterpriseMinds

Founded :
2017
Type
Size :
100-1000
Stage :
Profitable
About

Enterprise Minds, with a core focus on engineering products, automation and intelligence, partners with customers on the trajectory towards increasing outcomes, relevance and growth.


Harnessing the power of Data and the forces that define AI, Machine Learning and Data Science, we believe in institutionalising go-to-market models, not just exploring possibilities.


We believe in a customer-centric ethic without and a people-centric paradigm within. With a strong sense of community, ownership, and collaboration, our people work in a spirit of co-creation, co-innovation and co-development to engineer next-generation software products with the help of accelerators.


Through Communities we connect and attract talent that shares skills and expertise. Through Innovation Labs and global design studios we deliver creative solutions.


We create vertical isolated pods which have a narrow but deep focus. We also create horizontal pods to collaborate and deliver sustainable outcomes.


We follow Agile methodologies to fail fast and deliver scalable and modular solutions. We constantly self-assess and realign to work with each customer in the most impactful manner.

Connect with the team
Nikita Aher
phani kalyan

Similar jobs

globe teleservices
Posted by deepshikha thapar
Bengaluru (Bangalore)
4 - 8 yrs
₹10L - ₹15L / yr
Python
SQL

RESPONSIBILITIES:

• Requirement understanding and elicitation; analyze data/workflows; contribute to product, project, and proof-of-concept (POC) work
• Contribute to preparing design documents and effort estimations.
• Develop AI/ML models using best-in-class ML techniques.
• Build, test, and deploy AI/ML solutions.
• Work with Business Analysts and Product Managers to assist with defining functional user stories.
• Ensure deliverables across teams are of high quality and clearly documented.
• Recommend best ML practices/industry standards for any ML use case.
• Proactively take up R&D and recommend solution options for any ML use case.

REQUIREMENTS:

Required Skills

• Overall experience of 4 to 7 years working on AI/ML framework development
• Good programming knowledge in Python is a must.
• Good knowledge of R and SAS is desired.
• Good hands-on working knowledge of SQL, data models, and CRISP-DM.
• Proficiency with uni/multivariate statistics, algorithm design, and predictive AI/ML modelling.
• Strong knowledge of machine learning algorithms: linear regression, logistic regression, KNN, Random Forest, Support Vector Machines, and Natural Language Processing.
• Experience with NLP and deep neural networks using synthetic and artificial data.
• Involvement in different phases of the SDLC, with good working exposure to SDLC methodologies such as Agile.

Bengaluru (Bangalore)
3 - 8 yrs
₹20L - ₹35L / yr
SQL
Python
Metrics management
Data Analytics

Responsibilities

  • Work with large and complex blockchain data sets and derive investment relevant metrics in close partnership with financial analysts and blockchain engineers.
  • Apply knowledge of statistics, programming, data modeling, simulation, and advanced mathematics to recognize patterns, identify opportunities, pose business questions, and make valuable discoveries leading to the development of fundamental metrics needed to evaluate various crypto assets.
  • Build a strong understanding of existing metrics used to value various decentralized applications and protocols.
  • Build customer facing metrics and dashboards.
  • Work closely with analysts, engineers, Product Managers and provide feedback as we develop our data analytics and research platform.

Qualifications

  • Bachelor's degree in Mathematics, Statistics, a relevant technical field, or equivalent practical experience; or a degree in an analytical field (e.g. Computer Science, Engineering, Mathematics, Statistics, Operations Research, Management Science)
  • 3+ years experience with data analysis and metrics development
  • 3+ years experience analyzing and interpreting data, drawing conclusions, defining recommended actions, and reporting results across stakeholders
  • 2+ years experience writing SQL queries
  • 2+ years experience scripting in Python
  • Demonstrated curiosity in and excitement for Web3/blockchain technologies
Branch International
Posted by Reshika Mendiratta
Remote only
7yrs+
₹50L - ₹70L / yr
Data Structures
Algorithms
Object Oriented Programming (OOPs)
ETL
ETL architecture
+5 more

Branch Overview


Imagine a world where every person has improved access to financial services. People could start new businesses, pay for their children’s education, cover their emergency medical bills – the possibilities to improve life are endless. 


Branch is a global technology company revolutionizing financial access for millions of underserved banking customers today across Africa and India. By leveraging the rapid adoption of smartphones, machine learning and other technology, Branch is pioneering new ways to improve access and value for those overlooked by banks. From instant loans to market-leading investment yields, Branch offers a variety of products that help our customers be financially empowered.


Branch’s mission-driven team is led by the co-founders of Kiva.org and one of the earliest product leaders of PayPal. Branch has raised over $100 million from leading Silicon Valley investors, including Andreessen Horowitz (a16z) and Visa. 

 

With over 32 million downloads, Branch is one of the most popular finance apps in the world.

 

Job Overview

Branch launched in India in early 2019 and has seen rapid adoption and growth. In 2020 we started building out a full Engineering team in India to accelerate our success here. This team is working closely with our engineering team (based in the United States, Nigeria, and Kenya) to strengthen the capabilities of our existing product and build out new product lines for the company.


You will work closely with our Product and Data Science teams to design and maintain multiple technologies, including our API backend, credit scoring and underwriting systems, payments integrations, and operations tools. We face numerous interesting technical challenges ranging from maintaining complex financial systems to accessing and processing creative data sources for our algorithmic credit model. 


As a company, we are passionate about our customers, fearless in the face of barriers, and driven by data. As an engineering team, we value bottom-up innovation and decentralized decision-making: We believe the best ideas can come from anyone in the company, and we are working hard to create an environment where everyone feels empowered to propose solutions to the challenges we face. We are looking for individuals who thrive in a fast-moving, innovative, and customer-focused setting.


Responsibilities

  • Make significant contributions to Branch’s data platform including data models, transformations, warehousing, and BI systems by bringing in best practices.
  • Build customer facing and internal products and APIs with industry best practices around security and performance in mind.
  • Influence and shape the company’s technical and product roadmap by providing timely and accurate inputs and owning various outcomes.
  • Collaborate with peers in other functional areas (Machine Learning, DevOps, etc.) to identify potential growth areas and systems needed.
  • Guide and mentor younger engineers around you.
  • Scale our systems to ever-growing levels of traffic and handle complexity.


Qualifications

  • You have strong experience (8+ years) of designing, coding, and shipping data and backend software for web-based or mobile products.
  • Experience coordinating and collaborating with various business stakeholders and company leadership on critical functional decisions and technical roadmap.
  • You have strong knowledge of software development fundamentals, including relevant background in computer science fundamentals, distributed systems, data storage and processing, and agile development methodologies.
  • Have experience designing maintainable and scalable data architecture for ETL and BI purposes.
  • You are able to utilize your knowledge and expertise to code and ship quality products in a timely manner.
  • You are pragmatic and combine a strong understanding of technology and product needs to arrive at the best solution for a given problem.
  • You are highly entrepreneurial and thrive in taking ownership of your own impact. You take the initiative to solve problems before they arise.
  • You are an excellent collaborator & communicator. You know that startups are a team sport. You listen to others, aren’t afraid to speak your mind and always try to ask the right questions. 
  • You are excited by the prospect of working in a distributed team and company, working with teammates from all over the world.

Benefits of Joining

  • Mission-driven, fast-paced and entrepreneurial environment
  • Competitive salary and equity package
  • A collaborative and flat company culture
  • Remote first, with the option to work in-person occasionally
  • Fully-paid Group Medical Insurance and Personal Accidental Insurance
  • Unlimited paid time off including personal leave, bereavement leave, sick leave
  • Fully paid parental leave - 6 months maternity leave and 3 months paternity leave
  • Monthly WFH stipend alongside a one time home office set-up budget
  • $500 Annual professional development budget 
  • Discretionary trips to our offices across the globe, with global travel medical insurance 
  • Team meals and social events- Virtual and In-person

Branch International is an Equal Opportunity Employer. The company does not and will not discriminate in employment on any basis prohibited by applicable law. We’re looking for more than just qualifications -- so if you’re unsure that you meet the criteria, please do not hesitate to apply!

 

Cloth software company
Agency job
via Jobdost by Sathish Kumar
Delhi
1 - 3 yrs
₹1L - ₹6L / yr
SQL
Data Analytics

What you will do:

  • Understand the processes, KPIs, and pain points of CaaStle business teams
  • Build scalable data products, self-service tools, data cubes to analyze and present data associated with acquisition, retention, product performance, operations, client services, etc.
  • Closely partner with data engineering, product, and business teams and participate in requirements capture, research design, data collection, dashboard generation, and translation of results into actionable insights that can add value for business stakeholders
  • Leverage advanced analytics to drive key success metrics for business and revenue generation
  • Operationalize, implement, and automate changes to drive data-driven decisions
  • Attend and play an active role in answering questions from the executive and/or business teams through data mining and analysis

We would love for you to have:

  • Education: Advanced degree in Computer Science, Statistics, Mathematics, Engineering, Economics, Business Analytics or related field is required
  • Experience: 2-4 years of professional experience
  • Proficiency in data visualization/reporting tools (e.g. Tableau, Qlikview)
  • Experience in A/B testing and measuring the performance of experiments
  • Strong proficiency with SQL-based languages. Experience with large-scale data analytics technologies (e.g. Hadoop and Spark)
  • Strong analytical skills and business mindset with the ability to translate complex concepts and analysis into clear and concise takeaways to drive insights and strategies
  • Excellent communication, social, and presentation skills with meticulous attention to detail
  • Programming experience in Python, R, or other languages
  • Knowledge of Data mining, statistical modeling approaches, and techniques

 

CaaStle is committed to equality of opportunity in employment. It has been and will continue to be the policy of CaaStle to provide full and equal employment opportunities to all employees and candidates for employment without regard to race, color, religion, national or ethnic origin, veteran status, age, sexual orientation, gender identity, or physical or mental disability. This policy applies to all terms, conditions and privileges of employment, such as those pertaining to training, transfer, promotion, compensation and recreational programs.

xpressbees
Posted by Alfiya Khan
Pune, Bengaluru (Bangalore)
6 - 8 yrs
₹15L - ₹25L / yr
Big Data
Data Warehouse (DWH)
Data modeling
Apache Spark
Data integration
+10 more
Company Profile
XpressBees – a logistics company started in 2015 – is amongst the fastest growing
companies of its sector. While we started off rather humbly in the space of
ecommerce B2C logistics, the last 5 years have seen us steadily progress towards
expanding our presence. Our vision to evolve into a strong full-service logistics
organization reflects itself in our new lines of business like 3PL, B2B Xpress and cross
border operations. Our strong domain expertise and constant focus on meaningful
innovation have helped us rapidly evolve as the most trusted logistics partner of
India. We have progressively carved our way towards best-in-class technology
platforms, an extensive network reach, and a seamless last mile management
system. While on this aggressive growth path, we seek to become the one-stop-shop
for end-to-end logistics solutions. Our big focus areas for the very near future
include strengthening our presence as service providers of choice and leveraging the
power of technology to improve efficiencies for our clients.

Job Profile
As a Lead Data Engineer in the Data Platform Team at XpressBees, you will build the data platform
and infrastructure to support high quality and agile decision-making in our supply chain and logistics
workflows.
You will define the way we collect and operationalize data (structured / unstructured), and
build production pipelines for our machine learning models, and (RT, NRT, Batch) reporting &
dashboarding requirements. As a Senior Data Engineer in the XB Data Platform Team, you will use
your experience with modern cloud and data frameworks to build products (with storage and serving
systems)
that drive optimisation and resilience in the supply chain via data visibility, intelligent decision making,
insights, anomaly detection and prediction.

What You Will Do
• Design and develop data platform and data pipelines for reporting, dashboarding and
machine learning models. These pipelines would productionize machine learning models
and integrate with agent review tools.
• Meet the data completeness, correctness and freshness requirements.
• Evaluate and identify the data store and data streaming technology choices.
• Lead the design of the logical model and implement the physical model to support
business needs. Come up with logical and physical database design across platforms (MPP,
MR, Hive/PIG) which are optimal physical designs for different use cases (structured/semi
structured). Envision & implement the optimal data modelling, physical design,
performance optimization technique/approach required for the problem.
• Support your colleagues by reviewing code and designs.
• Diagnose and solve issues in our existing data pipelines and envision and build their
successors.

Qualifications & Experience relevant for the role

• A bachelor's degree in Computer Science or related field with 6 to 9 years of technology
experience.
• Knowledge of Relational and NoSQL data stores, stream processing and micro-batching to
make technology & design choices.
• Strong experience in System Integration, Application Development, ETL, Data-Platform
projects. Talented across technologies used in the enterprise space.
• Software development experience, including:
• Expertise in relational and dimensional modelling
• Exposure across the full SDLC process
• Experience in cloud architecture (AWS)
• Proven track record of keeping existing technical skills current and developing new ones, so that
you can make strong contributions to deep architecture discussions around systems and
applications in the cloud (AWS).

• Characteristics of a forward thinker and self-starter who flourishes with new challenges
and adapts quickly to learning new knowledge
• Ability to work with cross-functional teams of consulting professionals across multiple
projects.
• Knack for helping an organization to understand application architectures and integration
approaches, to architect advanced cloud-based solutions, and to help launch the build-out
of those systems
• Passion for educating, training, designing, and building end-to-end systems.
Propellor.ai
Posted by Kajal Jain
Remote only
1 - 4 yrs
₹5L - ₹15L / yr
Python
SQL
Spark
Hadoop
Big Data
+2 more

Big Data Engineer/Data Engineer


What we are solving
Welcome to today’s business data world where:
• Unification of all customer data into one platform is a challenge

• Extraction is expensive
• Business users do not have the time/skill to write queries
• High dependency on tech team for written queries

These facts may look scary but there are solutions with real-time self-serve analytics:
• Fully automated data integration from any kind of a data source into a universal schema
• Analytics database that streamlines data indexing, query and analysis into a single platform.
• Start generating value from Day 1 through deep dives, root cause analysis and micro segmentation

At Propellor.ai, this is what we do.
• We help our clients reduce effort and increase effectiveness quickly
• By clearly defining the scope of projects
• By using dependable, scalable, future-proof technology solutions like Big Data solutions and Cloud Platforms
• By engaging with Data Scientists and Data Engineers to provide end-to-end solutions, leading to the industrialisation of Data Science model development and deployment

What we have achieved so far
Since we started in 2016,
• We have worked across 9 countries with 25+ global brands and 75+ projects
• We have 50+ clients, 100+ Data Sources and 20TB+ data processed daily

Work culture at Propellor.ai
We are a small, remote team that believes in
• Working with a few, but only the highest-quality, team members who want to become the very best in their fields.
• With each member's belief and faith in what we are solving, we collectively see the Big Picture.
• No hierarchy leads us to believe in reaching the decision maker without any hesitation, so that our actions can have fruitful and aligned outcomes.
• Each one is a CEO of their domain. So, every choice we make is aimed at helping our employees and clients succeed together!

To read more about us click here:
https://bit.ly/3idXzs0

About the role
We are building an exceptional team of Data Engineers who are passionate developers and want to push the boundaries to solve complex business problems using the latest tech stack. As a Big Data Engineer, you will work with various Technology and Business teams to deliver our Data Engineering offerings to our clients across the globe.

Role Description

• The role would involve big data pre-processing & reporting workflows including collecting, parsing, managing, analysing, and visualizing large sets of data to turn information into business insights
• Develop the software and systems needed for end-to-end execution on large projects
• Work across all phases of SDLC, and use Software Engineering principles to build scalable solutions
• Build the knowledge base required to deliver increasingly complex technology projects
• The role would also involve testing various machine learning models on Big Data and deploying learned models for ongoing scoring and prediction.

Education & Experience
• B.Tech. or equivalent degree in CS/CE/IT/ECE/EEE; 3+ years of experience designing technological solutions to complex data problems, developing & testing modular, reusable, efficient and scalable code to implement those solutions.

Must have (hands-on) experience
• Python and SQL expertise
• Distributed computing frameworks (Hadoop Ecosystem & Spark components)
• Must be proficient in a cloud computing platform (AWS/Azure/GCP)
• Experience in GCP (BigQuery/Bigtable, Pub/Sub, Dataflow, App Engine), AWS, or Azure preferred

• Linux environment, SQL and shell scripting

Desirable
• Statistical or machine learning DSL like R
• Distributed and low-latency (streaming) application architecture
• Row-store distributed DBMSs such as Cassandra, CouchDB, MongoDB, etc.
• Familiarity with API design

Hiring Process:
1. One phone screening round to gauge your interest and knowledge of fundamentals
2. An assignment to test your skills and ability to come up with solutions in a certain time
3. Interview 1 with our Data Engineer lead
4. Final Interview with our Data Engineer Lead and the Business Teams

Preferred Immediate Joiners

RedSeer Consulting
Posted by Raunak Swarnkar
Bengaluru (Bangalore)
0 - 2 yrs
₹10L - ₹15L / yr
Python
PySpark
SQL
pandas
Cloud Computing
+2 more

BRIEF DESCRIPTION:

At least 1 year of Python, Spark, SQL, and data engineering experience

Primary Skillset: PySpark, Scala/Python/Spark, Azure Synapse, S3, RedShift/Snowflake

Relevant Experience: Legacy ETL job Migration to AWS Glue / Python & Spark combination

 

ROLE SCOPE:

Reverse engineer the existing/legacy ETL jobs

Create the workflow diagrams and review the logic diagrams with Tech Leads

Write equivalent logic in Python & Spark

Unit test the Glue jobs and certify the data loads before passing to system testing

Follow the best practices, enable appropriate audit & control mechanism

Analytically skilled; identify root causes quickly and debug issues efficiently

Take ownership of the deliverables and support the deployments
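A sketch of the "write equivalent logic in Python & Spark" step: a hypothetical legacy SQL aggregation is first re-expressed as a pure Python function, so the job's logic can be unit-tested and certified before system testing. All names are illustrative, not from a real job; the actual Glue migration would express the same logic with PySpark's `groupBy`/`agg`:

```python
from collections import defaultdict

# A legacy ETL aggregation re-expressed as a pure, unit-testable Python
# function before porting to PySpark. The hypothetical legacy SQL is:
#   SELECT customer, SUM(amount) FROM orders GROUP BY customer
# Keeping the transform pure lets us certify the data load with plain
# unit tests before the Glue job ever runs.
def total_by_customer(rows):
    totals = defaultdict(float)
    for row in rows:
        totals[row["customer"]] += row["amount"]
    return dict(totals)

orders = [
    {"customer": "a", "amount": 10.0},
    {"customer": "b", "amount": 5.0},
    {"customer": "a", "amount": 2.5},
]
print(total_by_customer(orders))  # {'a': 12.5, 'b': 5.0}
```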

 

REQUIREMENTS:

Create data pipelines for data integration into cloud stacks, e.g. Azure Synapse

Code data processing jobs in Azure Synapse Analytics, Python, and Spark

Experience in dealing with structured, semi-structured, and unstructured data in batch and real-time environments.

Should be able to process .json, .parquet and .avro files
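Of the three formats listed, .json can be read with the Python standard library alone; .parquet and .avro typically require third-party readers such as pyarrow or fastavro. A minimal sketch for newline-delimited JSON:

```python
import io
import json

# Read newline-delimited JSON (one record per line), skipping blank lines.
# Parquet/Avro would need third-party readers (e.g. pyarrow, fastavro).
def read_ndjson(fp):
    return [json.loads(line) for line in fp if line.strip()]

sample = io.StringIO('{"id": 1}\n\n{"id": 2}\n')
print(read_ndjson(sample))  # [{'id': 1}, {'id': 2}]
```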

 

PREFERRED BACKGROUND:

Tier 1/2 candidates from IIT/NIT/IIITs

However, relevant experience and a learning attitude take precedence

Bengaluru (Bangalore)
4 - 7 yrs
₹10L - ₹20L / yr
Data Science
Python
Machine Learning (ML)
Deep Learning
SQL
Work-days: Sunday through Thursday
Work shift: Day time


  •  Strong problem-solving skills with an emphasis on product development.
• Experience using statistical computer languages (R, Python, SQL, etc.) to manipulate data and draw
insights from large data sets.
• Experience in building ML pipelines with Apache Spark, Python
• Proficiency in implementing the end-to-end Data Science lifecycle
• Experience in Model fine-tuning and advanced grid search techniques
• Experience working with and creating data architectures.
• Knowledge of a variety of machine learning techniques (clustering, decision tree learning, artificial neural
networks, etc.) and their real-world advantages/drawbacks.
• Knowledge of advanced statistical techniques and concepts (regression, properties of distributions,
statistical tests and proper usage, etc.) and experience with applications.
• Excellent written and verbal communication skills for coordinating across teams.
• A drive to learn and master new technologies and techniques.
• Assess the effectiveness and accuracy of new data sources and data gathering techniques.
• Develop custom data models and algorithms to apply to data sets.
• Use predictive modeling to increase and optimize customer experiences, revenue generation, ad targeting, and other business outcomes.
• Develop company A/B testing framework and test model quality.
• Coordinate with different functional teams to implement models and monitor outcomes.
• Develop processes and tools to monitor and analyze model performance and data accuracy.
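As a small, hedged sketch of the A/B testing framework item above, a two-proportion z-test is one simple way to compare conversion rates between two variants. This is purely illustrative, not the company's actual framework:

```python
import math

# Minimal A/B test check: two-proportion z-test comparing conversion rates.
# Returns the two-sided p-value; this is a sketch, not a full framework.
def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis that both variants convert equally.
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF (via erf).
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Hypothetical experiment: 120/1000 conversions in A vs 90/1000 in B.
p = two_proportion_p_value(120, 1000, 90, 1000)
print(p)
```

A fuller framework would add sample-size planning and guardrails against peeking, but the test statistic above is the core comparison.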

Key skills:
● Strong knowledge in Data Science pipelines with Python
● Object-oriented programming
● A/B testing framework and model fine-tuning
● Proficiency in using the scikit-learn, NumPy, and pandas packages in Python
Nice to have:
● Ability to work with containerized solutions: Docker/Compose/Swarm/Kubernetes
● Unit testing, Test-driven development practice
● DevOps, Continuous integration/ continuous deployment experience
● Agile development environment experience, familiarity with SCRUM
● Deep learning knowledge
NeoQuant Solutions Pvt Ltd
Posted by Shehnaz Siddiki
Mumbai
3 - 6 yrs
₹8L - ₹11L / yr
Microsoft Business Intelligence (MSBI)
SSIS
SQL Server Reporting Services (SSRS)
SQL server
Microsoft SQL Server
+3 more

MSBI Developer

We have the following opening in our organization:

Years of Experience: 4-8 years.

Location: Mumbai (Thane)/BKC/Andheri
Notice period: Max 15 days or Immediate 

Educational Qualification: MCA/ME/Msc-IT/BE/B-Tech/BCA/BSC IT in Computer Science/B.Tech

Requirements:

  • 3-8 years of consulting or relevant work experience
  • Should be good in SQL Server 2008 R2 and above.
  • Should be excellent at SQL, SSRS, SSIS, and SSAS.
  • Data modeling, fact & dimension design; work on a data warehouse or DW architecture design.
  • Implementing new technology like Power BI, including Power BI modeling.
  • Knowledge of Azure or R programming is an added advantage.
  • Experience in BI and visualization technology (Tableau, Power BI).
  • Advanced T-SQL programming skill
  • Can scope out a simple or semi-complex project based on business requirements and achievable benefits
  • Evaluate, design, and implement enterprise IT-based business solutions, often working on-site to help customers deploy their solutions.
Product based Company
Agency job
via Crewmates by Gowtham V
Coimbatore
4 - 15 yrs
₹5L - ₹25L / yr
ETL
Big Data
Hi Professionals,
We are looking for an ETL Developer for a reputed client @ Coimbatore (permanent role)
Work Location : Coimbatore
Experience : 4+ Years
Skills:
  • Strong experience in Talend or any of the ETL tools (Informatica/DataStage/Talend)
  • DB preference (Teradata/Oracle/SQL Server)
  • Supporting tools (JIRA/SVN)
Notice Period : Immediate to 30 Days