
28+ Amazon Redshift Jobs in India

Apply to 28+ Amazon Redshift Jobs on CutShort.io. Find your next job, effortlessly. Browse Amazon Redshift Jobs and apply today!

Bengaluru (Bangalore), Mumbai, Delhi, Gurugram, Pune, Hyderabad, Ahmedabad, Chennai
3 - 7 yrs
₹8L - ₹15L / yr
AWS Lambda
Amazon S3
Amazon VPC
Amazon EC2
Amazon Redshift
+3 more

Technical Skills:


  • Ability to understand and translate business requirements into design.
  • Proficient in AWS infrastructure components such as S3, IAM, VPC, EC2, and Redshift.
  • Experience in creating ETL jobs using Python/PySpark (a minimal sketch follows this list).
  • Proficiency in creating AWS Lambda functions for event-based jobs.
  • Knowledge of automating ETL processes using AWS Step Functions.
  • Competence in building data warehouses and loading data into them.
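To make the Python/PySpark expectation above concrete, here is a minimal sketch of the kind of ETL job described; the bucket paths, column names, and table layout are illustrative assumptions, not part of the posting.

```python
# Minimal PySpark ETL sketch (paths and columns are hypothetical).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: read raw JSON landed in S3.
raw = spark.read.json("s3://example-raw-bucket/orders/2024/01/")

# Transform: deduplicate, derive a date column, drop bad rows.
clean = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_date", F.to_date("created_at"))
       .filter(F.col("amount") > 0)
)

# Load: write partitioned Parquet to a curated zone that Redshift can
# ingest via COPY or query through Spectrum.
(clean.write.mode("overwrite")
      .partitionBy("order_date")
      .parquet("s3://example-curated-bucket/orders/"))
```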


Responsibilities:


  • Understand business requirements and translate them into design.
  • Assess AWS infrastructure needs for development work.
  • Develop ETL jobs using Python/PySpark to meet requirements.
  • Implement AWS Lambda for event-based tasks.
  • Automate ETL processes using AWS Step Functions.
  • Build data warehouses and manage data loading.
  • Engage with customers and stakeholders to articulate the benefits of proposed solutions and frameworks.
Optisol Business Solutions Pvt Ltd
Posted by Veeralakshmi K
Remote, Chennai, Coimbatore, Madurai
4 - 10 yrs
₹10L - ₹15L / yr
Python
SQL
Amazon Redshift
Amazon RDS
AWS Simple Notification Service (SNS)
+5 more

Role Summary


As a Data Engineer, you will be an integral part of our Data Engineering team, supporting an event-driven serverless data engineering pipeline on AWS cloud, and responsible for assisting in the end-to-end analysis, development & maintenance of data pipelines and systems (DataOps). You will work closely with fellow data engineers & production support to ensure the availability and reliability of data for analytics and business intelligence purposes.


Requirements:


·      Around 4 years of working experience in data warehousing / BI systems.

·      Strong hands-on experience with Snowflake AND strong programming skills in Python.

·      Strong hands-on SQL skills.

·      Knowledge of any of the cloud databases such as Snowflake, Redshift, Google BigQuery, RDS, etc.

·      Knowledge of dbt for cloud databases.

·      AWS services such as SNS, SQS, ECS, Docker, Kinesis & Lambda functions (a hypothetical SQS-triggered Lambda is sketched after this list).

·      Solid understanding of ETL processes and data warehousing concepts.

·      Familiarity with version control systems (e.g., Git/Bitbucket) and collaborative development practices in an agile framework.

·      Experience with Scrum methodologies.

·      Infrastructure build tools such as CFT / Terraform are a plus.

·      Knowledge of Denodo, data cataloguing tools & data quality mechanisms is a plus.

·      Strong team player with good communication skills.
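As a rough illustration of the event-driven serverless pattern this role supports, here is a hypothetical SQS-triggered Lambda handler; the bucket, key layout, and payload fields are invented for the example.

```python
# Hypothetical Lambda handler for an SQS-triggered ingestion step.
import json
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    # SQS delivers a batch of messages under the "Records" key.
    for record in event.get("Records", []):
        payload = json.loads(record["body"])
        # Land each message in S3 for downstream warehouse loading
        # (bucket and key naming are illustrative).
        s3.put_object(
            Bucket="example-landing-bucket",
            Key=f"events/{payload['event_id']}.json",
            Body=json.dumps(payload).encode("utf-8"),
        )
    # Empty list signals that no messages in the batch failed.
    return {"batchItemFailures": []}
```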


Overview Optisol Business Solutions


OptiSol was named to this year's Best Companies to Work For list by Great Place to Work. We are a team of 500+ Agile employees with a development center in India and global offices in the US, UK (United Kingdom), Australia, Ireland, Sweden, and Dubai. Over a 16+ year journey, we have built 500+ digital solutions and have 200+ happy and satisfied clients across 24 countries.


Benefits, working with Optisol


·      Great Learning & Development program

·      Flextime, Work-at-Home & Hybrid Options

·      A knowledgeable, high-achieving, experienced & fun team.

·      Spot Awards & Recognition.

·      The chance to be a part of the next success story.

·      A competitive base salary.


More Than Just a Job, We Offer an Opportunity To Grow. Are you the one who looks to build your future and your dream? We have the job for you, to make your dream come true.

Hopscotch
Bengaluru (Bangalore)
5 - 8 yrs
₹6L - ₹15L / yr
Python
Amazon Redshift
Amazon Web Services (AWS)
PySpark
Data engineering
+3 more

About the role:

Hopscotch is looking for a passionate Data Engineer to join our team. You will work closely with other teams like data analytics, marketing, data science, and individual product teams to specify, validate, prototype, scale, and deploy data pipeline features and data architecture.


Here’s what will be expected out of you:

➢ Ability to work with a fast-paced startup mindset. Should be able to manage all aspects of data extraction, transfer, and load activities.

➢ Develop data pipelines that make data available across platforms.

➢ Should be comfortable executing ETL (Extract, Transform and Load) processes, including data ingestion, data cleaning, and curation into a data warehouse, database, or data platform.

➢ Work on various aspects of the AI/ML ecosystem – data modeling, data and ML pipelines.

➢ Work closely with DevOps and senior architects to come up with scalable system and model architectures for enabling real-time and batch services.


What we want:

➢ 5+ years of experience as a data engineer or data scientist with a focus on data engineering and ETL jobs.

➢ Well-versed in the concepts of data warehousing, data modelling, and/or data analysis.

➢ Experience using & building pipelines and performing ETL with industry-standard best practices on Redshift (2+ years).

➢ Ability to troubleshoot and solve performance issues with data ingestion, data processing & query execution on Redshift.

➢ Good understanding of orchestration tools like Airflow (a minimal DAG sketch follows this list).

➢ Strong Python and SQL coding skills.

➢ Strong experience with distributed systems like Spark.

➢ Experience with AWS data and ML technologies (AWS Glue, MWAA, Data Pipeline, EMR, Athena, Redshift, Lambda, etc.).

➢ Solid hands-on experience with various data extraction techniques like CDC or time/batch-based extraction and the related tools (Debezium, AWS DMS, Kafka Connect, etc.) for near-real-time and batch data extraction.
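For the Airflow point above, a minimal DAG sketch of a daily extract-then-load flow follows; the DAG id, task bodies, and Redshift COPY target are placeholders, and the Airflow 2.4+ style `schedule` argument is assumed.

```python
# Minimal Airflow DAG sketch: extract, then load into Redshift.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pulling source extracts to S3 ...")  # placeholder step

def load_to_redshift():
    # Placeholder for a COPY-from-S3 step; connection handling omitted.
    print("COPY staging.orders FROM 's3://example-bucket/orders/' ...")

with DAG(
    dag_id="daily_orders_load",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",   # Airflow 2.4+ style schedule argument
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_load = PythonOperator(task_id="load", python_callable=load_to_redshift)
    t_extract >> t_load
```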


Note:

Experience with product-based or e-commerce companies is an added advantage.

Softobiz Technologies Private limited

Posted by Adlin Asha
Hyderabad
8 - 18 yrs
₹15L - ₹30L / yr
ETL
Informatica
Data Warehouse (DWH)
Amazon Redshift
PostgreSQL
+2 more

Experience: 8+ Years

Work Location: Hyderabad

Mode of work: Work from Office


Senior Data Engineer / Architect

 

Summary of the Role

 

The Senior Data Engineer / Architect will be a key role within the data and technology team, responsible for engineering and building data solutions that enable seamless use of data within the organization. 

 

Core Activities

-         Work closely with the business teams and business analysts to understand and document data usage requirements

-         Develop designs relating to data engineering solutions including data pipelines, ETL, data warehouse, data mart and data lake solutions

-         Develop data designs for reporting and other data use requirements

-         Develop data governance solutions that provide data governance services including data security, data quality, data lineage, etc.

-         Lead implementation of data use and data quality solutions

-         Provide operational support for users for the implemented data solutions

-         Support development of solutions that automate reporting and business intelligence requirements

-         Support development of machine learning and AI solutions using large-scale internal and external datasets

 

Other activities

-         Work on and manage technology projects as and when required

-         Provide user and technical training on data solutions

 

Skills and Experience

-         At least 5-8 years of experience in a senior data engineer / architect role

-         Strong experience with AWS based data solutions including AWS Redshift, analytics and data governance solutions

-         Strong experience with industry standard data governance / data quality solutions  

-         Strong experience with managing a PostgreSQL data environment

-         Background as a software developer working in AWS / Python will be beneficial

-         Experience with BI tools like Power BI and Tableau

-         Strong written and oral communication skills

 

decisionfoundry
Posted by Christy Philip
Remote only
2 - 5 yrs
Best in industry
Amazon Redshift
ETL
Informatica
Data Warehouse (DWH)
Relational Database (RDBMS)
+1 more

Description

About us

Welcome to Decision Foundry!

We are both a high-growth startup and one of the longest-tenured Salesforce Marketing Cloud Implementation Partners in the ecosystem. Forged from a 19-year-old web analytics company, Decision Foundry is the leader in Salesforce intelligence solutions.

We win as an organization through our core tenets. They include:

  • One Team. One Theme.
  • We sign it. We deliver it.
  • Be Accountable and Expect Accountability.
  • Raise Your Hand or Be Willing to Extend It.

Requirements

• Strong understanding of data management principles and practices (Preferred experience: AWS Redshift).

• Experience with Tableau server administration, including user management and permissions (preferred, not mandatory).

• Ability to monitor alerts and application logs for data processing issues and troubleshooting.

• Ability to handle and monitor support tickets queues and act accordingly based on SLAs and priority.

• Ability to work collaboratively with cross-functional teams, including Data Engineers and BI team.

• Strong analytical and problem-solving skills.

• Familiar with data warehousing concepts and ETL processes.

• Experience with SQL, DBT and database technologies such as Redshift, Postgres, MongoDB, etc.

• Familiar with data integration tools such as Fivetran or Funnel.io

• Familiar with programming languages such as Python.

• Familiar with cloud-based data technologies such as AWS.

• Experience with data ingestion and orchestration tools such as AWS Glue.

• Excellent communication and interpersonal skills.

• Should possess 2+ years of experience.

Bengaluru (Bangalore)
5 - 10 yrs
₹20L - ₹40L / yr
NodeJS (Node.js)
React.js
Angular (2+)
AngularJS (1.x)
MongoDB
+18 more

For one of our premium customers, we are looking to hire a team of Sr. Full Stack Developers in Bangalore - tech geeks with 5+ years of full-time experience.


Please find the JD below:


Responsibilities:

Develop applications conforming to requirement specifications.

Perform code reviews and unit testing, and adhere to and be an advocate of best development practices.

Collaborate with cross-functional teams & squads.

 

Requirement:

  • 5+ years of web development experience using Node.js or similar web technologies.
  • Well-versed with front-end code in HTML5, CSS3, JavaScript, TypeScript, and React.js, with familiarity in various frameworks and template languages.
  • Possess a strong understanding of Functional Programming, especially in JavaScript.
  • Proficient with database design, optimization, and tuning in MySQL or MongoDB.
  • Experience in design patterns, unit testing, and automation techniques (e.g., Playwright).
  • Exposure to Amazon Web Services (EC2, S3, EBS, RDS, SQS, Redshift, etc.)
  • Exposure to Docker and Kubernetes.
  • Exposure to collaborating tools like GitHub, JIRA, and Confluence.
  • Experience in frameworks such as Express.js, and NestJs or proven ability to learn on the job.
  • Experience in Microservices and REST architecture.
  • Exposure to Scrum methodology and XP technical practices such as unit testing, pair programming, test-driven development, continuous integration, or continuous delivery.
  • Self-motivated, fast learner, detail-oriented, team player, and a sense of humor.

 

Personal Care Product Manufacturing
Mumbai
3 - 8 yrs
₹12L - ₹30L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+9 more

DATA ENGINEER


Overview

They started with a singular belief - what is beautiful cannot and should not be defined in marketing meetings. It's defined by the regular people like us, our sisters, our next-door neighbours, and the friends we make on the playground and in lecture halls. That's why we stand for people-proving everything we do. From the inception of a product idea to testing the final formulations before launch, our consumers are a part of each and every process. They guide and inspire us by sharing their stories with us. They tell us not only about the product they need and the skincare issues they face but also the tales of their struggles, dreams and triumphs. Skincare goes deeper than skin. It's a form of self-care for many. Wherever someone is on this journey, we want to cheer them on through the products we make, the content we create and the conversations we have. What we wish to build is more than a brand. We want to build a community that grows and glows together - cheering each other on, sharing knowledge, and ensuring people always have access to skincare that really works.

 

Job Description:

We are seeking a skilled and motivated Data Engineer to join our team. As a Data Engineer, you will be responsible for designing, developing, and maintaining the data infrastructure and systems that enable efficient data collection, storage, processing, and analysis. You will collaborate with cross-functional teams, including data scientists, analysts, and software engineers, to implement data pipelines and ensure the availability, reliability, and scalability of our data platform.


Responsibilities:

Design and implement scalable and robust data pipelines to collect, process, and store data from various sources.

Develop and maintain data warehouse and ETL (Extract, Transform, Load) processes for data integration and transformation.

Optimize and tune the performance of data systems to ensure efficient data processing and analysis.

Collaborate with data scientists and analysts to understand data requirements and implement solutions for data modeling and analysis.

Identify and resolve data quality issues, ensuring data accuracy, consistency, and completeness (illustrative checks are sketched after these responsibilities).

Implement and maintain data governance and security measures to protect sensitive data.

Monitor and troubleshoot data infrastructure, perform root cause analysis, and implement necessary fixes.

Stay up-to-date with emerging technologies and industry trends in data engineering and recommend their adoption when appropriate.
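To illustrate the kind of data-quality checks mentioned above, here is a small sketch in Pandas; the column names and the failure conditions are hypothetical.

```python
# Illustrative completeness/consistency/accuracy checks (columns are
# hypothetical).
import pandas as pd

def check_quality(df: pd.DataFrame) -> dict:
    return {
        "row_count": len(df),
        "null_order_ids": int(df["order_id"].isna().sum()),            # completeness
        "duplicate_order_ids": int(df["order_id"].duplicated().sum()),  # consistency
        "negative_amounts": int((df["amount"] < 0).sum()),              # accuracy
    }

sample = pd.DataFrame(
    {"order_id": [1, 2, 2, None], "amount": [10.0, -5.0, 7.5, 3.0]}
)
print(check_quality(sample))  # flags 1 null, 1 duplicate, 1 negative amount
```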


Qualifications:

Bachelor’s or higher degree in Computer Science, Information Systems, or a related field.

Proven experience as a Data Engineer or similar role, working with large-scale data processing and storage systems.

Strong programming skills in languages such as Python, Java, or Scala.

Experience with big data technologies and frameworks like Hadoop, Spark, or Kafka.

Proficiency in SQL and database management systems (e.g., MySQL, PostgreSQL, or Oracle).

Familiarity with cloud platforms like AWS, Azure, or GCP, and their data services (e.g., S3, Redshift, BigQuery).

Solid understanding of data modeling, data warehousing, and ETL principles.

Knowledge of data integration techniques and tools (e.g., Apache Nifi, Talend, or Informatica).

Strong problem-solving and analytical skills, with the ability to handle complex data challenges.

Excellent communication and collaboration skills to work effectively in a team environment.


Preferred Qualifications:

Advanced knowledge of distributed computing and parallel processing.

Experience with real-time data processing and streaming technologies (e.g., Apache Kafka, Apache Flink).

Familiarity with machine learning concepts and frameworks (e.g., TensorFlow, PyTorch).

Knowledge of containerization and orchestration technologies (e.g., Docker, Kubernetes).

Experience with data visualization and reporting tools (e.g., Tableau, Power BI).

Certification in relevant technologies or data engineering disciplines.



Graasai
Posted by Vineet A
Pune
3 - 7 yrs
₹10L - ₹30L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
+9 more

Graas uses predictive AI to turbo-charge growth for eCommerce businesses - we are "Growth-as-a-Service". Graas integrates traditional data silos and applies a machine-learning AI engine, acting as an in-house data scientist to predict trends and give real-time insights and actionable recommendations for brands. The platform can also turn insights into action by seamlessly executing these recommendations across marketplace storefronts, brand.coms, social and conversational commerce, performance marketing, inventory management, warehousing, and last-mile logistics - all of which impacts a brand's bottom line, driving profitable growth.


Roles & Responsibilities:

Work on implementation of real-time and batch data pipelines for disparate data sources.

  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS technologies.
  • Build and maintain an analytics layer that utilizes the underlying data to generate dashboards and provide actionable insights.
  • Identify improvement areas in the current data system and implement optimizations.
  • Work on specific areas of data governance including metadata management and data quality management.
  • Participate in discussions with Product Management and Business stakeholders to understand functional requirements and interact with other cross-functional teams as needed to develop, test, and release features.
  • Develop Proof-of-Concepts to validate new technology solutions or advancements.
  • Work in an Agile Scrum team and help with planning, scoping and creation of technical solutions for the new product capabilities, through to continuous delivery to production.
  • Work on building intelligent systems using various AI/ML algorithms. 

 

Desired Experience/Skill:

 

  • Must have worked on analytics applications involving data lakes, data warehouses, and reporting implementations.
  • Experience with private and public cloud architectures, with their pros/cons.
  • Ability to write robust code in Python and SQL for data processing. Experience in libraries such as Pandas is a must; knowledge of one of the frameworks such as Django or Flask is a plus.
  • Experience in implementing data processing pipelines using AWS services: Kinesis, Lambda, Redshift/Snowflake, RDS (a hypothetical Kinesis producer is sketched after this list).
  • Knowledge of Kafka and Redis is preferred.
  • Experience in the design and implementation of real-time and batch pipelines. Knowledge of Airflow is preferred.
  • Familiarity with machine learning frameworks (like Keras or PyTorch) and libraries (like scikit-learn).
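As a sketch of the real-time side of such a pipeline, here is a hypothetical Kinesis producer; the stream name, region, and event shape are assumptions for illustration.

```python
# Hypothetical producer pushing events onto a Kinesis stream.
import json
import boto3

kinesis = boto3.client("kinesis", region_name="ap-south-1")

def publish_event(event: dict) -> None:
    kinesis.put_record(
        StreamName="example-clickstream",        # illustrative stream name
        Data=json.dumps(event).encode("utf-8"),
        PartitionKey=str(event["user_id"]),      # keeps a user's events ordered
    )

publish_event({"user_id": 42, "action": "add_to_cart", "sku": "ABC-1"})
```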
Cubera Tech India Pvt Ltd
Bengaluru (Bangalore), Chennai
5 - 8 yrs
Best in industry
Data engineering
Big Data
Java
Python
Hibernate (Java)
+10 more

Data Engineer - Senior

Cubera is a data company revolutionizing big data analytics and Adtech through data share value principles wherein the users entrust their data to us. We refine the art of understanding, processing, extracting, and evaluating the data that is entrusted to us. We are a gateway for brands to increase their lead efficiency as the world moves towards web3.

What are you going to do?

Design & develop high-performance, scalable solutions that meet the needs of our customers.

Work closely with Product Management, Architects, and cross-functional teams.

Build and deploy large-scale systems in Java/Python.

Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.

Create data tools for analytics and data scientist team members that assist them in building and optimizing their algorithms.

Follow best practices that can be adopted in the Big Data stack.

Use your engineering experience and technical skills to drive the features and mentor the engineers.

What are we looking for ( Competencies) :

Bachelor’s degree in computer science, computer engineering, or related technical discipline.

Overall 5 to 8 years of programming experience in Java and Python, including object-oriented design.

Data handling frameworks: should have a working knowledge of one or more data handling frameworks like Hive, Spark, Storm, Flink, Beam, Airflow, NiFi, etc.

Data infrastructure: should have experience in building, deploying, and maintaining applications on popular cloud infrastructure like AWS, GCP, etc.

Data stores: must have expertise in one of the general-purpose NoSQL data stores like Elasticsearch, MongoDB, Redis, Redshift, etc.

Strong sense of ownership, focus on quality, responsiveness, efficiency, and innovation.

Ability to work with distributed teams in a collaborative and productive manner.

Benefits:

Competitive Salary Packages and benefits.

Collaborative, lively and an upbeat work environment with young professionals.

Job Category: Development

Job Type: Full Time

Job Location: Bangalore

 

People Impact
Agency job
via People Impact by Pruthvi K
Remote only
4 - 10 yrs
₹10L - ₹20L / yr
Amazon Redshift
Data Warehousing
Amazon Web Services (AWS)
Snowflake schema
Data Warehouse (DWH)

Job Title: Data Warehouse/Redshift Admin

Location: Remote

Job Description

AWS Redshift Cluster Planning

AWS Redshift Cluster Maintenance

AWS Redshift Cluster Security

AWS Redshift Cluster monitoring.

Experience managing day-to-day operations of provisioning, maintaining backups, DR, and monitoring of AWS Redshift/RDS clusters

Hands-on experience with query tuning in a high-concurrency environment (an illustrative system-table check is sketched below)

Expertise in setting up and managing AWS Redshift

AWS certifications preferred (AWS Certified SysOps Administrator)
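As one illustration of Redshift query tuning, the sketch below pulls the slowest recent statements from the STL_QUERY system view; the cluster endpoint and credentials are placeholders.

```python
# Illustrative slow-query check against Redshift system views.
import psycopg2

conn = psycopg2.connect(
    host="example-cluster.abc123.ap-south-1.redshift.amazonaws.com",
    port=5439, dbname="dev", user="admin", password="<placeholder>",
)
with conn.cursor() as cur:
    # STL_QUERY keeps recent query history; rank by elapsed time.
    cur.execute("""
        SELECT query, TRIM(querytxt) AS sql_text,
               DATEDIFF(ms, starttime, endtime) AS elapsed_ms
        FROM stl_query
        ORDER BY elapsed_ms DESC
        LIMIT 10;
    """)
    for query_id, sql_text, elapsed_ms in cur.fetchall():
        print(query_id, elapsed_ms, sql_text[:80])
```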

Encubate Tech Private Ltd
Mumbai
5 - 6 yrs
₹15L - ₹20L / yr
Amazon Web Services (AWS)
Amazon Redshift
Data modeling
ETL
Agile/Scrum
+7 more

Roles and Responsibilities

Seeking an AWS Cloud Engineer / Data Warehouse Developer for our Data CoE team to help us configure and develop new AWS environments for our Enterprise Data Lake and migrate on-premise traditional workloads to the cloud. Must have a sound understanding of BI best practices, relational structures, dimensional data modelling, structured query language (SQL) skills, and data warehouse and reporting techniques.

• Extensive experience in providing AWS Cloud solutions to various business use cases.

• Creating star schema data models, performing ETLs, and validating results with business representatives (an illustrative fact-table load is sketched after these bullets).

• Supporting implemented BI solutions by monitoring and tuning queries and data loads, addressing user questions concerning data integrity, and monitoring performance and communicating functional and technical issues.
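A hedged sketch of the star-schema load step referenced above, using the Redshift Data API to COPY Parquet from S3 into a fact table; the cluster identifier, IAM role ARN, and table names are invented for the example.

```python
# Hypothetical fact-table load via the Redshift Data API.
import boto3

client = boto3.client("redshift-data", region_name="ap-south-1")

copy_sql = """
    COPY dw.fact_sales
    FROM 's3://example-curated-bucket/sales/'
    IAM_ROLE 'arn:aws:iam::123456789012:role/example-redshift-copy'
    FORMAT AS PARQUET;
"""

# Runs asynchronously; poll describe_statement for completion in practice.
client.execute_statement(
    ClusterIdentifier="example-cluster",
    Database="dev",
    DbUser="etl_user",
    Sql=copy_sql,
)
```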

Job Description:

This position is responsible for the successful delivery of business intelligence information to the entire organization and requires experience in BI development and implementations, data architecture, and data warehousing.

Requisite Qualification

Essential: AWS Certified Database Specialty or AWS Certified Data Analytics
Preferred: Any other data engineer certification

Requisite Experience

Essential: 4-7 yrs of experience
Preferred: 2+ yrs of experience in ETL & data pipelines

Skills Required

Special Skills Required

• AWS: S3, DMS, Redshift, EC2, VPC, Lambda, Delta Lake, CloudWatch, etc.
• Big Data: Databricks, Spark, Glue and Athena
• Expertise in Lake Formation, Python programming, Spark, Shell scripting
• Minimum Bachelor's degree with 5+ years of experience in designing, building, and maintaining AWS data components
• 3+ years of experience in data component configuration, related roles and access setup
• Expertise in Python programming
• Knowledge of all aspects of DevOps (source control, continuous integration, deployments, etc.)
• Comfortable working with DevOps tooling: Jenkins, Bitbucket, CI/CD
• Hands-on ETL development experience, preferably using SSIS
• SQL Server experience required
• Strong analytical skills to solve and model complex business requirements
• Sound understanding of BI best practices/methodologies, relational structures, dimensional data modelling, structured query language (SQL) skills, and data warehouse and reporting techniques

Preferred Skills

• Experience working in a Scrum environment.
• Experience in administration (Windows/Unix/Network/Database/Hadoop) is a plus.
• Experience in SQL Server, SSIS, SSAS, SSRS.
• Comfortable with creating data models and visualizations using Power BI.
• Hands-on experience in relational and multi-dimensional data modelling, including multiple source systems from databases and flat files, and the use of standard data modelling tools.
• Ability to collaborate on a team with infrastructure, BI report development and business analyst resources, and clearly communicate solutions to both technical and non-technical team members.

Series A Funded product Startup
Agency job
via Qrata by Blessy Fernandes
Hyderabad
2 - 8 yrs
₹8L - ₹32L / yr
J2EE
J2SE
Java
Python
MySQL
+7 more
Requirements: Job Description
• Excellent knowledge of Core Java (J2SE) and J2EE technologies.
• Hands-on experience with RESTful services and API design is a must.
• Knowledge of microservices architecture is a must.
• Knowledge of design patterns is a must.
• Strong knowledge of exception handling and logging mechanisms is a must.
• Agile scrum participation experience. Work experience with several agile teams on an application built with microservices and event-based architectures to be deployed on hybrid (on-prem/cloud) environments.
• Good knowledge of the Spring framework (MVC, Cloud, Data and Security, etc.) and ORM frameworks like JPA/Hibernate.
• Experience in managing the source code base through version control tools like SVN, GitHub, Bitbucket, etc.
• Experience in using and configuring continuous integration tools like Jenkins, Travis, GitLab, etc.
• Experience in design and development of SaaS/PaaS-based architectures and tenancy models.
• Experience in SaaS/PaaS-based application development used by a high volume of subscribers/customers.
• Awareness and understanding of data security and privacy.
• Experience in performing Java code reviews using review tools like SonarQube, etc.
• Good understanding of the end-to-end software development lifecycle. Ability to read and understand requirements and design documents.
• Good analytical skills; should be self-driven.
• Good communication and interpersonal skills.
• Open to learning new technologies and domains.
• A good team player, ready to take up new challenges. Active communication and coordination with clients and internal stakeholders.
Requirements: Skills and Qualifications
• 6-8 years of experience in developing Java/J2EE-based enterprise web applications
• Languages: Java, J2EE, and Python
• Databases: MySQL, Oracle, SQL Server, PostgreSQL, Redshift, MongoDB
• DB scripting: SQL and PL/SQL
• Frameworks: Spring, Spring Boot, Jersey, Hibernate and JPA
• OS: Windows, Linux/Unix
• Cloud services: AWS and Azure
• Version control / DevOps tools: Git, Bitbucket and Jenkins
• Message brokers: RabbitMQ and Kafka
• Deployment servers: Tomcat, Docker, and Kubernetes
• Build tools: Gradle/Maven
Perfios
Agency job
via Seven N Half by Susmitha Goddindla
Bengaluru (Bangalore)
4 - 6 yrs
₹4L - ₹15L / yr
SQL
ETL tool
Python
MongoDB
Data Science
+15 more
Job Description
1. ROLE AND RESPONSIBILITIES
1.1. Implement next generation intelligent data platform solutions that help build high performance distributed systems.
1.2. Proactively diagnose problems and envisage long term life of the product focusing on reusable, extensible components.
1.3. Ensure agile delivery processes.
1.4. Work collaboratively with stake holders including product and engineering teams.
1.5. Build best-practices in the engineering team.
2. PRIMARY SKILL REQUIRED
2.1. 2-6 years of core software product development experience.
2.2. Experience working with data-intensive projects, with a variety of technology stacks including different programming languages (Java, Python, Scala).
2.3. Experience in building infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources to support other teams in running pipelines/jobs/reports, etc.
2.4. Experience in Open-source stack
2.5. Experience working with RDBMS and NoSQL databases
2.6. Knowledge of enterprise data lakes, data analytics, reporting, in-memory data handling, etc.
2.7. Core computer science academic background
2.8. Aspiration to continue pursuing a career in the technical stream
3. Optional Skill Required:
3.1. Understanding of Big Data technologies and Machine learning/Deep learning
3.2. Understanding of diverse set of databases like MongoDB, Cassandra, Redshift, Postgres, etc.
3.3. Understanding of Cloud Platform: AWS, Azure, GCP, etc.
3.4. Experience in BFSI domain is a plus.
4. PREFERRED SKILLS
4.1. A startup mentality: comfort with ambiguity, a willingness to test, learn and improve rapidly
Seed funded product start-up
Agency job
via Qrata by Blessy Fernandes
Remote only
4 - 8 yrs
₹30L - ₹60L / yr
Ruby on Rails (ROR)
Ruby
PostgreSQL
Elastic Search
Redis
+1 more

Job Description

This is a remote position.

We’re in search of a senior back-end engineer that specializes in building highly scalable, highly available, reliable, secure, and fault-tolerant systems.
As a part of the Core Backend team, you'll be working on some complex and interesting problems such as building suggestion algorithms to bring out meaningful relations between our users.
With a lean team like ours, you'll have ample opportunities to work with various aspects of the application and build something meaningful.

Requirements

Must-haves:
  • You need to be excited about the problem that we are trying to solve
  • Should have excellent command over designing systems with a minimum experience of 4 years.
  • Should have experience with various database flavors and writing complex queries. We work with Postgres, Redshift, ElasticSearch, TimescaleDB, and Redis.
  • Additional knowledge in Golang will be an advantage
  • Strong Data Structures and Algorithms knowledge
  • Should have created APIs from scratch which are being used in production
  • Should be comfortable with creating systems handling up to 100k requests per minute, and have a mindset that understands scale
  • Ability to give clarity and communicate well with the team, including Product, Backend and Dev-Ops (if and when needed)
  • Ability to use profiling tools well, getting to root causes of bugs fast
  • Ability to pick and work on adjacent technologies if and when required (Eg: If the best performance monitoring solution needs a basic firebase setup, that should not be a blocker for you to go ahead and do so)
  • Knowledge of how to set up relevant test cases
  • You need to be comfortable working in a remote environment (Good internet connection and availability on phone is required)
Good to have
  1. App Development experience
  2. Experience in Test Driven Development
  3. Ability to tell Product what creates better User Experience, ability to tell frontend what API responses will help the user get a faster load time
  4. Inquisitiveness to understand the system as a whole, and not only be stuck to your domain of expertise (Eg: Figure out why excessive API calls are being made, brainstorm with Product and Frontend team to reduce the same without harming the UX)
  5. Experience in working on a Social Media Product
  6. Good knowledge of Graphs, and their applications
  7. Data-Driven Approach to monitoring
 

Benefits

  1. Work timings - You are the master of your time. However, with great freedom comes great responsibility. If you have committed something to the team, we expect that you will give it your best to make sure that commitment is done and is done on time.
  2. Leave policy - Take a leave whenever for whatever reason you want. You don't need to explain yourself to us
  3. Health insurance for you and your family
Ushur Technologies Pvt Ltd

Posted by Priyanka N
Bengaluru (Bangalore)
6 - 12 yrs
Best in industry
MongoDB
Spark
Hadoop
Big Data
Data engineering
+5 more
What You'll Do:
● Our Infrastructure team is looking for an excellent Big Data Engineer to join a core group that designs the industry's leading Micro-Engagement Platform. This role involves design and implementation of big data architectures and frameworks for the industry's leading intelligent workflow automation platform. As a specialist on the Ushur Engineering team, your responsibilities will be to:
● Use your in-depth understanding to architect and optimize databases and data ingestion pipelines
● Develop HA strategies, including replica sets and sharding, for highly available clusters (an illustrative sharding setup is sketched after this list)
● Recommend and implement solutions to improve performance, resource consumption, and resiliency
● On an ongoing basis, identify bottlenecks in databases in development and production environments and propose solutions
● Help the DevOps team with your deep knowledge in the areas of database performance, scaling, tuning, migration & version upgrades
● Provide verifiable technical solutions to support operations at scale and with high availability
● Recommend appropriate data processing toolsets and big data ecosystems to adopt
● Design and scale databases and pipelines across multiple physical locations on the cloud
● Conduct root-cause analysis of data issues
● Be self-driven; constantly research and suggest the latest technologies
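For the replica-set/sharding bullet above, a minimal PyMongo sketch of enabling hashed sharding on a collection follows; the database, collection, and router host names are hypothetical, and the commands assume a connection to a mongos router.

```python
# Illustrative sharding setup via MongoDB admin commands (PyMongo).
from pymongo import MongoClient

client = MongoClient("mongodb://mongos.example.internal:27017")

# Enable sharding on the database, then shard the collection on a
# hashed key so writes spread evenly across shards.
client.admin.command("enableSharding", "engagements")
client.admin.command(
    "shardCollection",
    "engagements.events",
    key={"customer_id": "hashed"},
)
```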

The experience you need:
● Engineering degree in Computer Science or a related field
● 10+ years of experience working with databases, most of which should have been around NoSQL technologies
● Expertise in implementing and maintaining distributed big data pipelines and ETL processes
● Solid experience in one of the following cloud-native data platforms (AWS Redshift / Google BigQuery / Snowflake)
● Exposure to real-time processing techniques like Apache Kafka and CDC tools (Debezium, Qlik Replicate)
● Strong experience with the Linux operating system
● Solid knowledge of database concepts, MongoDB, SQL, and NoSQL internals
● Experience with backup and recovery for production and non-production environments
● Experience in security principles and their implementation
● Exceptionally passionate about always keeping the product quality bar at an extremely high level

Nice-to-haves
● Proficient with one or more of Python/Node.js/Java/similar languages

Why you want to Work with Us:
● Great Company Culture. We pride ourselves on having a values-based culture that is welcoming, intentional, and respectful. Our internal NPS of over 65 speaks for itself - employees recommend Ushur as a great place to work!
● Bring your whole self to work. We are focused on building a diverse culture, with innovative ideas where you and your ideas are valued. We are a start-up and know that every person has a significant impact!
● Rest and Relaxation. 13 paid leaves, wellness Friday offs (aka a day off to care for yourself - every last Friday of the month), 12 paid sick leaves, and more!
● Health Benefits. Preventive health checkups, medical insurance covering dependents, wellness sessions, and health talks at the office.
● Keep learning. One of our core values is Growth Mindset - we believe in lifelong learning. Certification courses are reimbursed. Ushur Community offers wide resources for our employees to learn and grow.
● Flexible Work. In-office or hybrid working model, depending on position and location. We seek to create an environment for all our employees where they can thrive in both their profession and personal life.
Gurugram, Delhi, Noida, Ghaziabad, Faridabad
4 - 8 yrs
₹10L - ₹15L / yr
NodeJS (Node.js)
MongoDB
Mongoose
Express
CI/CD
+10 more
RESPONSIBILITIES

• Proven working experience in backend app development and experience with Node.js.
• Build advanced e-commerce backend applications for multiple client platforms (both React and Android).
• Understanding of design principles and good architecture patterns.
• Proper Data Structures and Algorithms knowledge is a must.
• GraphQL and Apollo Server knowledge.
• Collaborate with cross-functional teams to define, design, and ship new features.
• Work with outside data sources and APIs like that of Unicommerce.
• Create unit-test code for robustness, including edge cases, usability, and general reliability.
• Work on bug fixing and improving application performance.
• Continuously discover, evaluate, and implement new technologies to maximize development efficiency.
• Translate designs and wireframes into high-quality code.
• Have a good understanding of CI/CD tools (any).
• Robust knowledge of popular databases like MongoDB, Elastic Search, DynamoDB, Redis, etc.
• Knowledge of AWS services like EC2, Lambda, Kinesis, Redshift, S3 is a big plus.


MUST HAVE
• CI/CD
• 3+ years in Node JS
• HTML, CSS, JavaScript
• MongoDB, Elastic Search, DynamoDB, Redis
• AWS services like EC2, Lambda, Kinesis, Redshift, S3 are a big plus.
• Data Structures and Algorithm knowledge is a must.
Remote only
5 - 10 yrs
₹30L - ₹50L / yr
Java
Python
Ruby
Ruby on Rails (ROR)
Go Programming (Golang)
+4 more
Requirements

Must-haves:

● You need to be excited about the problem that we are trying to solve
● Should have excellent command over designing systems, with a minimum experience of 4 years
● Should have experience with various database flavors and writing complex queries. We work with Postgres, Redshift, ElasticSearch, TimescaleDB, and Redis
● Additional knowledge in Golang will be an advantage
● Strong Data Structures and Algorithms knowledge
● Should have created APIs from scratch which are being used in production
● Should be comfortable with creating systems handling up to 100k requests per minute, and have a mindset that understands scale
● Ability to give clarity and communicate well with the team, including Product, Backend and Dev-Ops (if and when needed)
● Ability to use profiling tools well, getting to root causes of bugs fast
● Ability to pick and work on adjacent technologies if and when required (Eg: If the best performance monitoring solution needs a basic firebase setup, that should not be a blocker for you to go ahead and do so)
● Knowledge of how to set up relevant test cases
● You need to be comfortable working in a remote environment (Good internet connection and availability on phone is required)

Good to have

● App Development experience
● Experience in Test Driven Development
● Ability to tell Product what creates better User Experience, ability to tell frontend what API responses will help the user get a faster load time
● Inquisitiveness to understand the system as a whole, and not only be stuck to your domain of expertise (Eg: Figure out why excessive API calls are being made, brainstorm with Product and Frontend team to reduce the same without harming the UX)
● Experience in working on a Social Media Product
● Good knowledge of Graphs, and their applications
● Data-Driven Approach to monitoring

Benefits

● Work timings - You are the master of your time. However, with great freedom comes great responsibility. If you have committed something to the team, we expect that you will give it your best to make sure that commitment is done and is done on time.
● Leave policy - Take a leave whenever for whatever reason you want. You don't need to explain yourself to us.
● Health insurance for you and your family
Remote only
5 - 8 yrs
₹20L - ₹25L / yr
Tableau
SQL
Relational Database (RDBMS)
Amazon Redshift
PostgreSQL
+3 more

What is the role?

You will be responsible for building and maintaining highly scalable data infrastructure for our cloud-hosted SaaS product. You will work closely with the Product Managers and Technical team to define and implement data pipelines for customer-facing and internal reports.

Key Responsibilities

  • Understand the business processes and requirements thoroughly and convert them into reports.
  • Should be able to suggest the right approach to the users of the reports.
  • Developing, maintaining, and managing advanced reporting, analytics, dashboards and other BI solutions.

What are we looking for?

An enthusiastic individual with the following skills. Please do not hesitate to apply if you do not match all of it. We are open to promising candidates who are passionate about their work and are team players.

  • Education - BE/MCA or equivalent
  • Good experience in working on the performance side of the reports.
  • Expert-level knowledge of querying in any RDBMS, preferably Redshift or Postgres
  • Expert-level knowledge of data warehousing concepts
  • Advanced-level scripting to create calculated fields, sets, parameters, etc.
  • Degree in mathematics, computer science, information systems, or a related field.
  • 5-7 years of exclusive experience with Tableau and data warehouses.

Whom will you work with?

You will work with a top-notch tech team, working closely with the CTO and product team.  

What can you look for?

A wholesome opportunity in a fast-paced environment that will enable you to juggle between concepts yet maintain the quality of content, interact and share your ideas, and have loads of learning while at work. Work with a team of highly talented young professionals and enjoy the benefits.

We are

A fast-growing SaaS commerce company based in Bangalore with offices in Delhi, Mumbai, SF, Dubai, Singapore and Dublin. We have three products in our portfolio: Plum, Empuls and Compass, and we work with over 1,000 global clients. We help our clients in engaging and motivating their employees, sales teams, channel partners or consumers for better business results.

 

Agency job
via Cloud Counselage by Umme Salma
Remote only
2 - 3 yrs
₹8L - ₹12L / yr
Python
Databases
SQL
Amazon Redshift
Amazon Web Services (AWS)
+3 more
Role Description: 
As a Data Engineer, you will need to understand the organisation's data sets and how to bring them together. You will work with the sales engineering team to support custom solutions offered to clients, filling the gap between development, sales engineering, and data ops by creating, maintaining, and documenting scripts to support ongoing custom solutions.

Job Responsibilities: 

  • Collaborating across an agile team to continuously design, iterate, and develop big data systems.
  • Extracting, transforming, and loading data into internal databases.
  • Optimizing our new and existing data pipelines for speed and reliability. 
  • Deploying new products and product improvements
  • Documenting and managing multiple repositories of code.

Mandatory Requirements:
  • Experience with Pandas to process data and Jupyter notebooks to keep it all together (a minimal sketch follows this list).
  • Familiarity with pulling and pushing files from SFTP and AWS S3; familiarity with AWS Athena and Redshift is mandatory.
  • Familiarity with SQL programming to query and transform data from relational databases.
  • Familiarity with AWS Cloud and Linux (and Linux work environments) is mandatory.
  • Excellent written and verbal communication skills.
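A minimal sketch of the Pandas-plus-S3 workflow named in the first bullet; the bucket, keys, and columns are assumptions for illustration.

```python
# Pull a CSV from S3, transform with Pandas, push the result back.
import io
import boto3
import pandas as pd

s3 = boto3.client("s3")

obj = s3.get_object(Bucket="example-data-bucket", Key="exports/users.csv")
df = pd.read_csv(io.BytesIO(obj["Body"].read()))

# Aggregate, then write the curated output for downstream consumers.
summary = df.groupby("country", as_index=False)["user_id"].count()

buf = io.BytesIO()
summary.to_csv(buf, index=False)
s3.put_object(
    Bucket="example-data-bucket",
    Key="curated/users_by_country.csv",
    Body=buf.getvalue(),
)
```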

Desired Requirements:
  • Excellent organizational skills, including attention to precise details. 
  • Strong multitasking skills and ability to work in a fast-paced environment.
  • Know your way around REST APIs (able to integrate, not necessarily publish).

Qualities:
  • Python 
  • SQL
  • API 
  • AWS 
  • GCP 
  • OCI 
  • Azure 
  • Redshift

Eligibility Criteria:
  • 5 years of experience in database systems
  • 3 years of experience with Python to develop scripts

What’s for the Candidate:
12 LPA

Job Location(s)
Hyderabad / Remote
EASEBUZZ

Posted by Amala Baby
Pune
2 - 4 yrs
₹2L - ₹20L / yr
Spotfire
Qlikview
Tableau
PowerBI
Data Visualization
+12 more

Company Profile:

 

Easebuzz is a payment solutions (fintech) company which enables online merchants to accept, process, and disburse payments through developer-friendly APIs. We are focused on building plug-n-play products, including the payment infrastructure, to solve complete business problems. It is definitely a wonderful place where all the action related to payments, lending, subscriptions and eKYC is happening at the same time.

 

We have been consistently profitable and are constantly developing new innovative products; as a result, we have grown 4x over the past year alone. We are well capitalised and recently closed a fundraise of $4M in March 2021 from prominent VC firms and angel investors. The company is based out of Pune and has a total strength of 180 employees. Easebuzz's corporate culture is tied to the vision of building a workplace which breeds open communication and minimal bureaucracy. An equal opportunity employer, we welcome and encourage diversity in the workplace. One thing you can be sure of is that you will be surrounded by colleagues who are committed to helping each other grow.

 

Easebuzz Pvt. Ltd. has its presence in Pune, Bangalore, Gurugram.

 


Salary: As per company standards.

 

Designation: Data Engineer

 

Location: Pune

 

Experience with ETL, Data Modeling, and Data Architecture

Design, build and operationalize large-scale enterprise data solutions and applications using one or more AWS data and analytics services in combination with third parties - Spark, EMR, DynamoDB, Redshift, Kinesis, Lambda, Glue.

Experience with AWS cloud data lake for development of real-time or near real-time use cases

Experience with messaging systems such as Kafka/Kinesis for real time data ingestion and processing

Build data pipeline frameworks to automate high-volume and real-time data delivery

Create prototypes and proof-of-concepts for iterative development.

Experience with NoSQL databases, such as DynamoDB, MongoDB, etc.

Create and maintain optimal data pipeline architecture.

Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.


Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies.

Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.

Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.

Keep our data separated and secure across national boundaries through multiple data centers and AWS regions.

Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.

Evangelize a very high standard of quality, reliability and performance for data models and algorithms that can be streamlined into the engineering and sciences workflow

Build and enhance data pipeline architecture by designing and implementing data ingestion solutions.

 

Employment Type

Full-time

 

Leading Sales Enabler
Agency job
via Qrata by Blessy Fernandes
Bengaluru (Bangalore)
5 - 10 yrs
₹25L - ₹40L / yr
ETL
Spark
Python
Amazon Redshift
• 5+ years of experience in a Data Engineer role.
• Proficiency in Linux.
• Must have SQL knowledge and experience working with relational databases, query authoring (SQL), as well as familiarity with databases including MySQL, Mongo, Cassandra, and Athena.
• Must have experience with Python/Scala.
• Must have experience with Big Data technologies like Apache Spark.
• Must have experience with Apache Airflow.
• Experience with data pipeline and ETL tools like AWS Glue.
• Experience working with AWS cloud services: EC2, S3, RDS, Redshift.
Bengaluru (Bangalore)
2 - 6 yrs
₹25L - ₹45L / yr
Data engineering
Data Analytics
Big Data
Apache Spark
Airflow
+8 more
● 2+ years of experience in a Data Engineer role.
● Proficiency in Linux.
● Experience working with AWS cloud services: EC2, S3, RDS, Redshift.
● Must have SQL knowledge and experience working with relational databases, query authoring (SQL), as well as familiarity with databases including MySQL, Mongo, Cassandra, and Athena.
● Must have experience with Python/Scala.
● Must have experience with Big Data technologies like Apache Spark.
● Must have experience with Apache Airflow.
● Experience with data pipelines and ETL tools like AWS Glue.
A logistic Company
Agency job
via Anzy by Dattatraya Kolangade
Bengaluru (Bangalore)
5 - 7 yrs
₹18L - ₹25L / yr
Data engineering
ETL
SQL
Hadoop
Apache Spark
+13 more
Key responsibilities:
• Create and maintain data pipelines
• Build and deploy ETL infrastructure for optimal data delivery
• Work with various teams, including product, design and the executive team, to troubleshoot data-related issues
• Create tools for data analysts and scientists to help them build and optimise the product
• Implement systems and processes for data access controls and guarantees
• Distill knowledge from experts in the field outside the org and optimise internal data systems
Preferred qualifications/skills:
• 5+ years of experience
• Strong analytical skills


• Degree in Computer Science, Statistics, Informatics, Information Systems
• Strong project management and organisational skills
• Experience supporting and working with cross-functional teams in a dynamic environment
• SQL guru with hands-on experience on various databases
• NoSQL databases like Cassandra, MongoDB
• Experience with Snowflake, Redshift
• Experience with tools like Airflow, Hevo
• Experience with Hadoop, Spark, Kafka, Flink
• Programming experience in Python, Java, Scala
netmedscom

Posted by Vijay Hemnath
Chennai
5 - 10 yrs
₹10L - ₹30L / yr
Machine Learning (ML)
Software deployment
CI/CD
Cloud Computing
Snowflake schema
+19 more

We are looking for an outstanding ML Architect (Deployments) with expertise in deploying Machine Learning solutions/models into production and scaling them to serve millions of customers, and with an adaptable and productive working style which fits a fast-moving environment.

 

Skills:

- 5+ years deploying Machine Learning pipelines in large enterprise production systems.

- Experience developing end to end ML solutions from business hypothesis to deployment / understanding the entirety of the ML development life cycle.
- Expert in modern software development practices; solid experience using source control management (CI/CD).
- Proficient in designing relevant architecture / microservices to fulfil application integration, model monitoring, training / re-training, model management, model deployment, model experimentation/development, alert mechanisms.
- Experience with public cloud platforms (Azure, AWS, GCP).
- Serverless services like lambda, azure functions, and/or cloud functions.
- Orchestration services like data factory, data pipeline, and/or data flow.
- Data science workbench/managed services like azure machine learning, sagemaker, and/or AI platform.
- Data warehouse services like Snowflake, Redshift, BigQuery, Azure SQL DW.
- Distributed computing services like PySpark, EMR, Databricks.
- Data storage services like cloud storage, S3, blob, S3 Glacier.
- Data visualization tools like Power BI, Tableau, QuickSight, and/or Qlik.
- Proven experience serving up predictive algorithms and analytics through batch and real-time APIs.
- Solid working experience with software engineers, data scientists, product owners, business analysts, project managers, and business stakeholders to design the holistic solution.
- Strong technical acumen around automated testing.
- Extensive background in statistical analysis and modeling (distributions, hypothesis testing, probability theory, etc.)
- Strong hands-on experience with statistical packages and ML libraries (e.g., Python scikit-learn, Spark MLlib, etc.)
- Experience in effective data exploration and visualization (e.g., Excel, Power BI, Tableau, Qlik, etc.)
- Experience in developing and debugging in one or more of the languages Java, Python.
- Ability to work in cross functional teams.
- Apply Machine Learning techniques in production including, but not limited to, neural nets, regression, decision trees, random forests, ensembles, SVM, Bayesian models, K-Means, etc. (a minimal train-and-persist sketch follows this list).
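As a minimal, generic illustration of the train-and-persist step that precedes model deployment (not this company's actual stack), here is a scikit-learn sketch on synthetic data:

```python
# Train a model on synthetic data and serialize it for serving.
import joblib
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("holdout accuracy:", model.score(X_te, y_te))

# The persisted artifact can then sit behind a batch or real-time API.
joblib.dump(model, "model.joblib")
```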

 

Roles and Responsibilities:

Deploying ML models into production, and scaling them to serve millions of customers.

Technical solutioning skills with deep understanding of technical API integrations, AI / Data Science, BigData and public cloud architectures / deployments in a SaaS environment.

Strong stakeholder relationship management skills - able to influence and manage the expectations of senior executives.
Strong networking skills with the ability to build and maintain strong relationships with both business, operations and technology teams internally and externally.

Provide software design and programming support to projects.

 

 Qualifications & Experience:

Engineering and postgraduate candidates, preferably in Computer Science, from premier institutions, with proven work experience as a Machine Learning Architect (Deployments) or a similar role for 5-7 years.

 

Pune
2 - 9 yrs
₹4L - ₹13L / yr
MS SQL Server
PostgreSQL
MySQL DBA
C#
MySQL
+1 more
SUMMARY
The Database Developer will perform day-to-day database management, maintenance, and troubleshooting by providing Tier 1 and Tier 2 support for diverse platforms including, but not limited to, MS SQL, Azure SQL, MySQL, PostgreSQL, and Amazon Redshift.

They are responsible for maintaining functional/technical support documentation and operational documentation, as well as reporting on performance metrics associated with job activity and platform stability.

Must adhere to SLAs pertaining to data movement and provide evidence and supporting documentation for incidents that violate those SLAs.

Other responsibilities include API development and integrations via Azure Functions, C#, or Python.
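To illustrate the Azure Functions responsibility just mentioned, here is a hypothetical Python HTTP-triggered function; the binding configuration (function.json) is omitted and the payload is invented for the example.

```python
# Hypothetical HTTP-triggered Azure Function (Python programming model v1).
import json
import azure.functions as func

def main(req: func.HttpRequest) -> func.HttpResponse:
    # Echo a small JSON payload; a real integration would query a database.
    name = req.params.get("name", "world")
    body = {"message": f"hello {name}"}
    return func.HttpResponse(
        json.dumps(body),
        mimetype="application/json",
        status_code=200,
    )
```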


Essential Duties and Responsibilities

• Advanced problem-solving skills
• Excellent communication skills
• Advanced T-SQL scripting skills
• Query optimization and performance tuning; familiarity with traces, execution plans, and server logs
• SSIS package development and support
• PowerShell scripting
• Report visualization via SSRS, Power BI, and/or Jupyter Notebook
• Maintain functional/technical support documentation
• Maintain operational documentation specific to automated jobs and job steps
• Develop, implement, and support user-defined stored procedures, functions, and (indexed) views
• Monitor database activities and provide Tier 1 and Tier 2 production support
• Provide functional and technical support to ensure performance, operation, and stability of database systems
• Manage data ingress and egress
• Track issue and/or project deliverables in Jira
• Assist in RDBMS patching, upgrades and enhancements
• Prepare database reports for managers as needed
• API integrations and development
Background/Experience
• Bachelor or advanced degree in computer science
• Microsoft SQL Server 2016 or higher
• Working knowledge of MySQL, PostgreSQL and/or Amazon Redshift
• C# and/or Python


Supervisory/Budget Responsibility

• No Supervisory Responsibility/No Budget Responsibility


Level of Authority to Make Decisions

The Database Developers expedite issue resolution pursuant to the functional/technical documentation available. Issue escalation is at their discretion and should result in additional functional/technical documentation for future reference. However, individual problem solving, decision making, and performance tuning will constitute 75% of their time.
Remote, NCR (Delhi | Gurgaon | Noida)
3 - 12 yrs
₹8L - ₹14L / yr
Data Warehouse (DWH)
ETL
Amazon Redshift

Responsible for planning, connecting, designing, scheduling, and deploying data warehouse systems. Develops, monitors, and maintains ETL processes, reporting applications, and data warehouse design.

Role and Responsibility

·         Plan, create, coordinate, and deploy data warehouses.

·         Design end user interface.

·         Create best practices for data loading and extraction.

·         Develop data architecture, data modeling, and ETL mapping solutions within a structured data warehouse environment.

·         Develop reporting applications and maintain data warehouse consistency.

·         Facilitate requirements gathering using expert listening skills and develop unique simple solutions to meet the immediate and long-term needs of business customers.

·         Supervise design throughout the implementation process.

·         Design and build cubes, writing custom scripts where required.

·         Develop and implement ETL routines according to the DWH design and architecture (a sketch follows this list).

·         Support the development and validation required through the lifecycle of the DWH and Business Intelligence systems, maintain user connectivity, and provide adequate security for the data warehouse.

·         Monitor the DWH and BI systems' performance and integrity; provide corrective and preventive maintenance as required.

·         Manage multiple projects at once.
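
As a rough illustration of such an ETL routine, the following Python sketch stages source rows into a warehouse table. The connection strings, table and column names are all hypothetical; pandas and SQLAlchemy are assumed.

import pandas as pd
import sqlalchemy as sa

# Hypothetical source (OLTP) and target (DWH) connections.
src = sa.create_engine("mysql+pymysql://etl:***@src-host/sales")
dwh = sa.create_engine("postgresql+psycopg2://etl:***@dwh-host/warehouse")

# Extract: pull the day's orders from the source system.
orders = pd.read_sql(
    "SELECT order_id, customer_id, amount, ordered_at FROM orders "
    "WHERE ordered_at >= CURDATE()",
    src,
)

# Transform: derive a surrogate date key for the date dimension.
orders["order_date_key"] = orders["ordered_at"].dt.strftime("%Y%m%d").astype(int)

# Load: replace the staging table; a real routine would then merge into the facts.
orders.to_sql("stg_fact_orders", dwh, if_exists="replace", index=False)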

DESIRABLE SKILL SET

·         Experience with technologies such as MySQL, MongoDB and SQL Server 2008, as well as with SSIS and stored procedures

·         Exceptional experience developing code, testing for quality assurance, administering RDBMSs, and monitoring databases

·         High proficiency in dimensional modeling techniques and their applications

·         Strong analytical, consultative, and communication skills, as well as the ability to exercise good judgment and work with both technical and business personnel

·         Several years of working experience with Tableau, MicroStrategy, Information Builders, and other reporting and analytical tools

·         Working knowledge of SAS and R code used in data processing and modeling tasks

·         Strong experience with Hadoop, Impala, Pig, Hive, YARN, and other “big data” technologies such as AWS Redshift or Google BigQuery

 

Data ToBiz
Posted by PS Dhillon
Chandigarh, NCR (Delhi | Gurgaon | Noida)
2 - 6 yrs
₹7L - ₹15L / yr
Data Warehousing
Amazon Redshift
Analytics
Python
Amazon Web Services (AWS)
+2 more
Job Responsibilities:
As a Data Warehouse Engineer in our team, you should have a proven ability to deliver high-quality work on time and with minimal supervision.
Develop or modify procedures to solve complex database design problems, including performance, scalability, security and integration issues for various clients (on-site and off-site).
Design, develop, test, and support the data warehouse solution.
Adopt best practices and industry standards, ensuring top-quality deliverables and playing an integral role in cross-functional system integration.
Design and implement formal data warehouse testing strategies and plans, including unit testing, functional testing, integration testing, performance testing, and validation testing.
Evaluate all existing hardware and software against required standards, and configure hardware clusters to match the scale of the data.
Data integration using enterprise development toolsets (e.g. ETL, MDM, Data Quality, CDC, Data Masking).
Maintain and develop all logical and physical data models for the enterprise data warehouse (EDW); a schema sketch follows this list.
Contribute to the long-term vision of the enterprise data warehouse (EDW) by delivering Agile solutions.
Interact with end users/clients and translate business language into technical requirements.
Act independently to expose and resolve problems.
Participate in data warehouse health monitoring and performance optimization, as well as quality documentation.
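
To make the dimensional modeling work concrete, here is a minimal star-schema sketch. The table and column names are illustrative, and SQLite is used purely as a stand-in engine so the snippet runs anywhere.

import sqlite3

# Generic DDL for one dimension and one fact table (illustrative names).
STAR_SCHEMA = """
CREATE TABLE dim_date (
    date_key   INTEGER PRIMARY KEY,  -- surrogate key, e.g. 20240115
    full_date  DATE NOT NULL,
    month_name VARCHAR(9),
    year_num   SMALLINT
);

CREATE TABLE fact_sales (
    date_key    INTEGER REFERENCES dim_date (date_key),
    product_key INTEGER,             -- FK to dim_product, omitted for brevity
    quantity    INTEGER,
    net_amount  DECIMAL(12, 2)
);
"""

conn = sqlite3.connect(":memory:")   # stand-in for the warehouse engine
conn.executescript(STAR_SCHEMA)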

Job Requirements:
2+ years of experience working in software development & data warehouse development for enterprise analytics.
2+ years of working with Python, with major experience in Redshift as a must and exposure to other warehousing tools.
Deep expertise in data warehousing and dimensional modeling, and the ability to bring best practices with regard to data management, ETL, API integrations, and data governance.
Experience working with data retrieval and manipulation tools for various data sources, both relational (MySQL, PostgreSQL, Oracle) and cloud-based storage.
Experience with analytic and reporting tools (Tableau, Power BI, SSRS, SSAS). Experience in the AWS cloud stack (S3, Glue, Redshift, Lake Formation); a Redshift loading sketch follows this list.
Experience in various DevOps practices, helping the client to deploy and scale the systems as per requirement.
Strong verbal and written communication skills with other developers and business clients.
Knowledge of the Logistics and/or Transportation domain is a plus.
Ability to handle/ingest very large data sets (both real-time and batched data) in an efficient manner.
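
As one small example of that AWS stack, bulk-loading S3 data into Redshift is typically done with a COPY statement; the sketch below issues one from Python. The bucket, table, IAM role ARN and cluster credentials are all hypothetical, and psycopg2 is assumed (Redshift speaks the PostgreSQL wire protocol).

import psycopg2

# Hypothetical COPY: bulk-load Parquet files from S3 into a fact table.
COPY_SQL = """
COPY analytics.fact_shipments
FROM 's3://example-etl-bucket/shipments/'
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
FORMAT AS PARQUET;
"""

conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439, dbname="prod", user="etl", password="***",
)
with conn, conn.cursor() as cur:   # the connection context commits on success
    cur.execute(COPY_SQL)
conn.close()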
Data ToBiz
Posted by PS Dhillon
Chandigarh, NCR (Delhi | Gurgaon | Noida)
2 - 6 yrs
₹7L - ₹15L / yr
ETL
Amazon Web Services (AWS)
Amazon Redshift
Python
Job Responsibilities:
Developing new data pipelines and ETL jobs for processing millions of records, built to scale as the data grows (see the Glue sketch after this list).
Pipelines should be optimised to handle real-time data, batch updates and historical data.
Establish scalable, efficient, automated processes for complex, large-scale data analysis.
Write high-quality code to gather and manage large data sets (both real-time and batch) from multiple sources, perform ETL and store it in a data warehouse.
Manipulate and analyse complex, high-volume, high-dimensional data from varying sources using a variety of tools and data analysis techniques.
Participate in data pipeline health monitoring and performance optimisation, as well as quality documentation.
Interact with end users/clients and translate business language into technical requirements.
Act independently to expose and resolve problems.
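
For a sense of what such a pipeline job can look like on AWS, here is a minimal AWS Glue (PySpark) job skeleton; the catalog database, table name, corrupt-record field and S3 path are hypothetical.

import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_ctx = GlueContext(SparkContext())
job = Job(glue_ctx)
job.init(args["JOB_NAME"], args)

# Extract: read raw events registered in the Glue Data Catalog.
raw = glue_ctx.create_dynamic_frame.from_catalog(database="raw", table_name="events")

# Transform: drop records Spark flagged as unparseable (field name is illustrative).
clean = raw.drop_fields(["_corrupt_record"])

# Load: write curated Parquet back to S3 for downstream batch consumers.
glue_ctx.write_dynamic_frame.from_options(
    frame=clean,
    connection_type="s3",
    connection_options={"path": "s3://example-bucket/curated/events/"},
    format="parquet",
)
job.commit()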

Job Requirements:
2+ years of experience working in software development & data pipeline development for enterprise analytics.
2+ years of working with Python, with exposure to various warehousing tools.
In-depth experience with commercial tools like AWS Glue, Talend, Informatica, DataStage, etc.
Experience with various relational databases like MySQL, MS SQL Server, Oracle etc. is a must.
Experience with analytics and reporting tools (Tableau, Power BI, SSRS, SSAS).
Experience in various DevOps practices, helping the client to deploy and scale the systems as per requirement.
Strong verbal and written communication skills with other developers and business clients.
Knowledge of the Logistics and/or Transportation domain is a plus.
Hands-on experience with traditional databases and ERP systems like Sybase and PeopleSoft.