Sr. Database Engineer

at Information Technology Services

Agency job
Pune
5 - 8 yrs
₹10L - ₹20L / yr
Full time
Skills
ETL
Informatica
Data Warehouse (DWH)
Relational Database (RDBMS)
SQL
Oracle
NOSQL Databases
Object Oriented Programming (OOPs)
System Programming
Sr. Database Engineer:
Preferred Education & Experience:
• Bachelor’s or master’s degree in Computer Engineering, Computer Science, Computer Applications, Mathematics, Statistics or a related technical field, or equivalent practical experience.
• Well-versed in, with 5+ years of hands-on, demonstrable experience in:
  ▪ Data Analysis & Data Modeling
  ▪ Database Design & Implementation
  ▪ Database Performance Tuning & Optimization
  ▪ PL/pgSQL & SQL
• 5+ years of hands-on development experience with a relational database (PostgreSQL/SQL Server/Oracle).
• 5+ years of hands-on development experience in SQL and PL/pgSQL, including stored procedures, functions, triggers, and views (a brief sketch follows this list).
• Demonstrable working experience with database design principles, SQL query optimization techniques, index management, integrity checks, statistics, and isolation levels.
• Demonstrable working experience in database read & write performance tuning and optimization.
• Knowledge of and experience working with Domain-Driven Design (DDD) concepts, Object-Oriented Programming (OOP) concepts, cloud architecture concepts, and NoSQL database concepts is an added advantage.
• Knowledge of and working experience in the Oil & Gas, Financial, and Automotive domains is a plus.
• Hands-on development experience with one or more NoSQL datastores such as Cassandra, HBase, MongoDB, DynamoDB, Elasticsearch, Neo4j, etc. is a plus.
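For illustration only, here is a minimal sketch of the PL/pgSQL skills listed above: a trigger function that keeps an audit timestamp current, executed from Python with psycopg2. The table, columns and connection string are hypothetical assumptions, and the EXECUTE FUNCTION syntax assumes PostgreSQL 11+.

```python
# Hypothetical sketch: a PL/pgSQL trigger that maintains an updated_at
# audit column, executed from Python with psycopg2. Table, column and
# connection details are illustrative, not from this posting.
import psycopg2

DDL = """
CREATE TABLE IF NOT EXISTS orders (
    id         BIGSERIAL PRIMARY KEY,
    status     TEXT NOT NULL,
    updated_at TIMESTAMPTZ NOT NULL DEFAULT now()
);

CREATE OR REPLACE FUNCTION touch_updated_at() RETURNS trigger AS $$
BEGIN
    NEW.updated_at := now();  -- refresh the audit column on every write
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

DROP TRIGGER IF EXISTS orders_touch ON orders;
CREATE TRIGGER orders_touch
    BEFORE UPDATE ON orders
    FOR EACH ROW EXECUTE FUNCTION touch_updated_at();  -- PostgreSQL 11+
"""

with psycopg2.connect("dbname=demo user=postgres") as conn:
    with conn.cursor() as cur:
        cur.execute(DDL)
```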

Job Location : Pune/Remote

Work Timings : 2:30 PM - 11:30 PM

Joining Period : Immediate to 20 days

Similar jobs

Manager - Analytics

at Leading Grooming Platform

Agency job
via Qrata
Spotfire
Qlikview
Tableau
PowerBI
Data Visualization
Data Analytics
Python
SQL
Remote, Ahmedabad
3 - 6 yrs
₹15L - ₹25L / yr
  • Extensive exposure to at least one Business Intelligence platform (ideally QlikView/Qlik Sense); if not Qlik, then ETL tool knowledge, e.g. Informatica/Talend
  • At least one data query language – SQL/Python
  • Experience in creating breakthrough visualizations
  • Understanding of RDBMS, data architecture/schemas, data integrations, data models and data flows is a must
Job posted by
Blessy Fernandes

Software Engineer Data/ETL

at Greendeck

Founded 2017  •  Product  •  20-100 employees  •  Profitable
Data Warehouse (DWH)
Informatica
ETL
Web Scraping
Python
Beautiful Soup
NOSQL Databases
MongoDB
Elastic Search
Indore
1 - 4 yrs
₹6L - ₹10L / yr

About Us

Greendeck is one of Europe’s leading retail-tech companies. We use artificial intelligence to help retailers and retail brands make better pricing, marketing and merchandising decisions.

We are a small team - based across London and Indore - with a big vision, and we are growing rapidly. We’re a Techstars company and are backed by some of London’s biggest VCs. We have been awarded UK’s ‘Startup of the Year’ and have been featured by Forbes as a startup 'set to blaze a trail'. We are well funded, profitable and have customers across the globe, from Japan and Austria to France, to name a few.

We are looking for

A Software Engineer - Data/ETL who will work on gathering data and improving our processes and systems. You'll also conduct research and analysis to help our teams deliver the highest-quality products to our customers.

You'll work closely alongside our client-facing teams, machine learning engineers, frontend engineers and the wider technical team to build new capabilities, focused on speed and reliability.

Your work includes

  • Own, manage and take an active part in monitoring the big data pipeline that collects more than 100M items daily for the Greendeck platform
  • Use tools like Celery to create distributed data-crunching systems
  • Use tools like Airflow to schedule and automate processes (a minimal sketch follows this list)
  • Work closely with Greendeck's ML team to create ML/DL pipelines
  • Create REST APIs to supplement the Greendeck platform
  • Write queries and scripts to automate various components of the data pipeline
  • Participate in incident resolution on weekends if the need arises
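As a hedged illustration of the Airflow item above, here is a minimal DAG that schedules a two-step extract/load job daily. The DAG id, task names and callables are assumptions for the sketch, not Greendeck's actual pipeline.

```python
# Minimal illustrative Airflow DAG (not Greendeck's real pipeline):
# a daily job that extracts items, then loads them downstream.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_items():
    print("fetching today's items...")  # placeholder for real extraction

def load_items():
    print("loading items into the store...")  # placeholder for real load

with DAG(
    dag_id="daily_item_pipeline",       # hypothetical name
    start_date=datetime(2022, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_items)
    load = PythonOperator(task_id="load", python_callable=load_items)
    extract >> load  # run extract before load
```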

 

Skills/ Requirements

  • Good knowledge of and experience in writing scripts in Python
  • Good knowledge of the Scrapy framework and web scraping
  • Good knowledge of Beautiful Soup (a Python library) and of the basic HTML elements and techniques used while scraping (a short sketch follows this list)
  • A good grasp of querying in NoSQL databases like MongoDB or Elasticsearch
  • Basic experience with and knowledge of creating REST APIs using Flask or FastAPI
  • Basic knowledge of ML and deep learning
  • A good grasp of the basic concepts of any one Git tool, like GitLab/GitHub
  • (Optional) Understanding of the CI/CD paradigm
  • (Optional) Basic knowledge of Docker
  • (Optional) Experience with data orchestration tools like Celery and Airflow
  • (Optional) Experience with Redis, Kafka, RabbitMQ
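A small requests + Beautiful Soup sketch of the scraping skills above. The URL and CSS selectors are invented for illustration; a real crawler would also need politeness delays and robots.txt handling.

```python
# Hypothetical scraping sketch: fetch a page and pull product names and
# prices out of the HTML. URL and selectors are made up for illustration.
import requests
from bs4 import BeautifulSoup

resp = requests.get("https://example.com/products", timeout=10)
resp.raise_for_status()

soup = BeautifulSoup(resp.text, "html.parser")
for card in soup.select("div.product"):           # hypothetical selector
    name = card.select_one("h2.name")
    price = card.select_one("span.price")
    if name and price:
        print(name.get_text(strip=True), price.get_text(strip=True))
```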

What you can expect

  • Attractive pay, bonus scheme and flexible vacation policy.
  • A truly flexible, trust-based, performance-driven work culture.
  • Lunch is on us, every day!
  • A young and passionate team building elegant products with intricate technology for the future of retail and e-commerce. Our average age is below 25!
  • The chance to make a huge difference to the success of a world-class SaaS product and the opportunity to make an impact.

It's important to us

  • That you relocate to Indore
Job posted by
Nidhi Sharma

Growth Analyst

at Felicitycare

Founded 2018  •  Product  •  20-100 employees  •  Profitable
Spotfire
Qlikview
Tableau
PowerBI
Data Visualization
Data Analytics
Google Analytics
Microsoft Excel
SQL
Remote, Jajpur
1 - 3 yrs
₹3L - ₹12L / yr

Responsibilities 

 

  • Identify, analyze, and interpret trends or patterns in complex data sets to develop a thorough understanding of users and acquisition channels.
  • Run exploratory analyses to uncover new areas of opportunity, generate hypotheses, and quickly assess the potential upside of a given opportunity.
  • Help execute projects to drive insights that lead to growth.
  • Work closely with marketing, design, product, support, and engineering to anticipate analytics needs and to quantify the impact of existing features, future product changes, and marketing campaigns.
  • Work with data engineering to develop and implement new analytical tools and improve our underlying data infrastructure. Build tracking plans for new and existing products and work with engineering to ensure proper implementation.
  • Analyze, forecast, and build custom reports to make key performance indicators and insights available to the entire company.
  • Monitor, optimize, and report on marketing and growth metrics and split-test results. Make recommendations based on analytics and test findings.
  • Drive an optimization- and data-minded culture inside the company.
  • Develop frameworks, models, tools, and processes to ensure that analytical insights can be incorporated into all key decision making.
  • Effectively present and communicate analysis to the company to drive business decisions.
  • Create a management dashboard including all important KPIs to be tracked at a company and department level.
  • Establish an end-to-end campaign ROI tracking mechanism to attribute sales to specific Google and Facebook campaigns.



Skills and Experience


  • Minimum 1-2 years of proven work experience in a data analyst role.
  • Excellent analytical skills and problem-solving ability; the ability to answer unstructured business questions and work independently to drive projects to conclusion.
  • Strong analytical skills with the capacity to collect, organize, analyze, and disseminate significant amounts of information with attention to detail and accuracy.
  • Experience extracting insights using advanced SQL or a similar tool to work efficiently at scale. Advanced expertise with commonly used analytics tools, including Google Analytics, Data Studio and Excel.
  • Strong knowledge of statistics, including experimental design for optimization, statistical significance, confidence intervals and predictive analytics techniques (a small significance-test sketch follows this list).
  • Must be self-directed, organized and detail-oriented, with the ability to multitask and work effectively in a fast-paced environment.
  • Active team player with excellent communication skills, a positive attitude and a good work ethic.
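To make the statistics requirement concrete, here is a hedged sketch of a two-proportion significance test for a split test, using statsmodels. The conversion counts are made-up example numbers.

```python
# Hypothetical two-proportion z-test for an A/B split test; the counts
# below are invented example data, not figures from this posting.
from statsmodels.stats.proportion import proportions_ztest

conversions = [420, 505]    # variant A, variant B
visitors = [10000, 10000]

stat, p_value = proportions_ztest(count=conversions, nobs=visitors)
print(f"z = {stat:.2f}, p = {p_value:.4f}")
# A small p-value (e.g. < 0.05) suggests the conversion rates differ.
```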
Job posted by
Shikha Patni

Sr. Analyst - SAS Modelling

at Analytics and IT MNC

Agency job
via Questworkx
SAS
SQL
Linear regression
Logistic regression
Data Science
Bengaluru (Bangalore)
2 - 5 yrs
₹15L - ₹25L / yr
Responsibilities
- Design, development and optimization of Anti-Money Laundering scenarios using statistical tools such as SAS
- Experience in extracting and manipulating large data sets
- Proficiency in analysing data using statistical techniques
- Experience in summarizing and visualizing analysis results
- Machine learning and anti-fraud modelling experience would be a plus
- Should have experience with SAS AML, SAS programming, R, Python, analytics and BI tools like SAS VA
- The consultant must be ready for short-term and long-term travel across India and outside India for project implementation

Must haves:
- SAS, SQL
- Modeling techniques (linear regression, logistic regression); see the brief sketch after this list
- Data Science, Excel, PowerPoint
- BFSI domain experience
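As a brief, generic illustration of the modelling techniques listed (the role itself centres on SAS), here is a scikit-learn logistic regression fitted on synthetic data; nothing in it is specific to this posting.

```python
# Generic logistic-regression sketch on synthetic data (a Python stand-in
# for the SAS modelling work the role describes).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))                       # synthetic features
y = (X @ np.array([1.5, -2.0, 0.5]) + rng.normal(size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```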
Job posted by
Jyoti Garach

Senior Data Engineer

at Velocity.in

Founded 2019  •  Product  •  20-100 employees  •  Raised funding
ETL
Informatica
Data Warehouse (DWH)
Data engineering
Oracle
PostgreSQL
DevOps
Amazon Web Services (AWS)
NodeJS (Node.js)
Ruby on Rails (ROR)
React.js
Python
Bengaluru (Bangalore)
4 - 9 yrs
₹15L - ₹35L / yr

We are an early stage start-up, building new fintech products for small businesses. Founders are IIT-IIM alumni, with prior experience across management consulting, venture capital and fintech startups. We are driven by the vision to empower small business owners with technology and dramatically improve their access to financial services. To start with, we are building a simple, yet powerful solution to address a deep pain point for these owners: cash flow management. Over time, we will also add digital banking and 1-click financing to our suite of offerings.

 

We have developed an MVP which is being tested in the market. We have closed our seed funding from marquee global investors and are now actively building a world class tech team. We are a young, passionate team with a strong grip on this space and are looking to on-board enthusiastic, entrepreneurial individuals to partner with us in this exciting journey. We offer a high degree of autonomy, a collaborative fast-paced work environment and most importantly, a chance to create unparalleled impact using technology.

 

Reach out if you want to get in on the ground floor of something which can turbocharge SME banking in India!

 

Technology stack at Velocity comprises a wide variety of cutting-edge technologies like NodeJS, Ruby on Rails, Reactive Programming, Kubernetes, AWS, Python, ReactJS, Redux (Saga), Redis, Lambda, etc.

 

Key Responsibilities

  • Responsible for building data and analytical engineering pipelines with standard ELT patterns, implementing data compaction pipelines, data modelling and overseeing overall data quality

  • Work with the Office of the CTO as an active member of our architecture guild

  • Writing pipelines to consume data from multiple sources

  • Writing a data transformation layer using DBT to transform millions of records into the data warehouse (a generic ELT sketch follows this list)

  • Implement data warehouse entities with common, re-usable data model designs, with automation and data quality capabilities

  • Identify downstream implications of data loads/migration (e.g., data quality, regulatory)
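A generic ELT sketch of the pipeline work described above, using pandas and SQLAlchemy. Connection strings, table and column names are assumptions; in practice DBT would express the transform step as SQL models rather than Python.

```python
# Generic ELT sketch (not Velocity's actual pipeline): pull raw rows from
# a source Postgres table, apply a light transform, load into a warehouse
# table. All names and connection strings are illustrative assumptions.
import pandas as pd
from sqlalchemy import create_engine

source = create_engine("postgresql://user:pass@source-db/app")
warehouse = create_engine("postgresql://user:pass@warehouse-db/analytics")

# Extract: raw payments from the operational store
df = pd.read_sql("SELECT id, amount_cents, created_at FROM payments", source)

# Transform: normalise units and derive a reporting date
df["amount"] = df["amount_cents"] / 100.0
df["report_date"] = pd.to_datetime(df["created_at"]).dt.date
df = df.drop(columns=["amount_cents", "created_at"])

# Load: replace the warehouse entity (DBT would manage this as a model)
df.to_sql("fct_payments", warehouse, if_exists="replace", index=False)
```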

 

What To Bring

  • 5+ years of software development experience; startup experience is a plus.

  • Past experience of working with Airflow and DBT is preferred

  • 5+ years of experience working in any backend programming language. 

  • Strong first-hand experience with data pipelines and relational databases such as Oracle, Postgres, SQL Server or MySQL

  • Experience with DevOps tools (GitHub, Travis CI, and JIRA) and methodologies (Lean, Agile, Scrum, Test Driven Development)

  • Experienced in formulating ideas, building proofs-of-concept (POCs), and converting them into production-ready projects

  • Experience building and deploying applications on both on-premise infrastructure and cloud infrastructure (AWS or Google Cloud)

  • A basic understanding of Kubernetes and Docker is a must.

  • Experience in data processing (ETL, ELT) and/or cloud-based platforms

  • Working proficiency and communication skills in verbal and written English.

 

Job posted by
Newali Hazarika

Data Analyst

at Cloth software company

Agency job
via Jobdost
SQL
Data Analytics
Delhi
1 - 3 yrs
₹1L - ₹6L / yr

What you will do:

  • Understand the processes, KPIs, and pain points of CaaStle's business teams
  • Build scalable data products, self-service tools, data cubes to analyze and present data associated with acquisition, retention, product performance, operations, client services, etc.
  • Closely partner with data engineering, product, and business teams and participate in requirements capture, research design, data collection, dashboard generation, and translation of results into actionable insights that can add value for business stakeholders
  • Leverage advanced analytics to drive key success metrics for business and revenue generation
  • Operationalize, implement, and automate changes to drive data-driven decisions
  • Attend and play an active role in answering questions from the executive and/or business teams through data mining and analysis

We would love for you to have:

  • Education: An advanced degree in Computer Science, Statistics, Mathematics, Engineering, Economics, Business Analytics or a related field is required
  • Experience: 2-4 years of professional experience
  • Proficiency in data visualization/reporting tools (e.g. Tableau, QlikView)
  • Experience in A/B testing and in measuring the performance of experiments
  • Strong proficiency with SQL-based languages. Experience with large-scale data analytics technologies (e.g. Hadoop and Spark)
  • Strong analytical skills and a business mindset, with the ability to translate complex concepts and analysis into clear and concise takeaways that drive insights and strategies
  • Excellent communication, social, and presentation skills with meticulous attention to detail
  • Programming experience in Python, R, or other languages
  • Knowledge of data mining, statistical modeling approaches, and techniques

 

CaaStle is committed to equality of opportunity in employment. It has been and will continue to be the policy of CaaStle to provide full and equal employment opportunities to all employees and candidates for employment without regard to race, color, religion, national or ethnic origin, veteran status, age, sexual orientation, gender identity, or physical or mental disability. This policy applies to all terms, conditions and privileges of employment, such as those pertaining to training, transfer, promotion, compensation and recreational programs.

Job posted by
Sathish Kumar

Data Engineer - Azure

at Global consulting company

Agency job
via HyringNinja
SQL Azure
Data migration
Windows Azure
Big Data
PySpark
Relational Database (RDBMS)
ETL
Amazon Web Services (AWS)
Pune
2 - 5 yrs
₹10L - ₹30L / yr

2-4 years of experience developing ETL solutions on Azure for big data, relational databases, and data warehouses.

 

Extensive hands-on experience implementing data migration and data processing using Azure services: ADLS, Azure Data Factory, Azure Functions, Synapse/DW, Azure SQL DB, Azure Analysis Service, Azure Databricks, Azure Data Catalog, ML Studio, AI/ML, Snowflake, etc.
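By way of illustration, a minimal PySpark sketch of the kind of Azure data processing described above: read raw CSVs from ADLS Gen2, clean them, and write partitioned Parquet back. The storage account, containers, paths and columns are hypothetical.

```python
# Hypothetical PySpark sketch of an Azure data-processing step. The
# abfss:// paths, storage account and column names are assumptions;
# running against ADLS also requires the cluster's Azure credentials.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("adls-etl-sketch").getOrCreate()

raw_path = "abfss://raw@mystorageacct.dfs.core.windows.net/sales/*.csv"
curated_path = "abfss://curated@mystorageacct.dfs.core.windows.net/sales/"

df = (
    spark.read.option("header", True).csv(raw_path)
    .withColumn("amount", F.col("amount").cast("double"))
    .filter(F.col("amount") > 0)          # drop invalid rows
)

df.write.mode("overwrite").partitionBy("region").parquet(curated_path)
```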

 

Well-versed in DevOps and CI/CD deployments

 

Experience with cloud migration methodologies and processes, including tools like Azure Data Factory, Data Migration Service, SSIS, etc.

 

Minimum of 2 years of RDBMS experience

 

Experience with private and public cloud architectures, pros/cons, and migration considerations.

 

Nice-to-Have Skills/Qualifications:

 

- DevOps on an Azure platform

- Experience developing and deploying ETL solutions on Azure

- IoT, event-driven, microservices, Containers/Kubernetes in the cloud

- Familiarity with the technology stack available in the industry for metadata management: Data Governance, Data Quality, MDM, Lineage, Data Catalog etc.

- Multi-cloud experience a plus - Azure, AWS, Google

   

Professional Skill Requirements

Proven ability to build, manage and foster a team-oriented environment

Proven ability to work creatively and analytically in a problem-solving environment

Desire to work in an information systems environment

Excellent communication (written and oral) and interpersonal skills

Excellent leadership and management skills

Excellent organizational, multi-tasking, and time-management skills

Job posted by
Thomas G

Data Analyst

at Games 24x7

Agency job
via zyoin
PowerBI
Big Data
Hadoop
Apache Hive
Business Intelligence (BI)
Data Warehouse (DWH)
SQL
Python
Tableau
Java
Bengaluru (Bangalore)
0 - 6 yrs
₹10L - ₹21L / yr
Location: Bangalore
Work Timing: 5 Days A Week

Responsibilities include:

• Ensure the right stakeholders get the right information at the right time
• Gather requirements with stakeholders to understand their data needs
• Create and deploy reports
• Participate actively in datamart design discussions
• Work on both RDBMS and Big Data platforms when designing BI solutions
• Write code (queries/procedures) in SQL / Hive / Drill that is both functional and elegant, following appropriate design patterns (a small query sketch follows this list)
• Design and plan BI solutions to automate regular reporting
• Debug, monitor and troubleshoot BI solutions
• Create and deploy datamarts
• Write relational and multidimensional database queries
• Integrate heterogeneous data sources into BI solutions
• Ensure the integrity of data flowing from heterogeneous data sources into BI solutions
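A small, hypothetical example of the SQL/Hive query work mentioned above, run through PyHive. The host, table and columns are assumptions for illustration.

```python
# Illustrative only: a reporting query against Hive via PyHive. The
# server host, table and column names are invented for this sketch.
from pyhive import hive

conn = hive.Connection(host="hive-server.example.com", port=10000)
cursor = conn.cursor()

cursor.execute(
    """
    SELECT game_id,
           COUNT(DISTINCT user_id) AS daily_players
    FROM   events
    WHERE  event_date = '2022-01-01'
    GROUP  BY game_id
    ORDER  BY daily_players DESC
    """
)
for game_id, daily_players in cursor.fetchall():
    print(game_id, daily_players)
```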

Minimum Job Qualifications:
• BE/B.Tech in Computer Science/IT from top colleges
• 1-5 years of experience in data warehousing and SQL
• Excellent analytical knowledge
• Excellent technical as well as communication skills
• Attention to even the smallest detail is mandatory
• Knowledge of SQL query writing and performance tuning
• Knowledge of Big Data technologies like Apache Hadoop, Apache Hive and Apache Drill
• Knowledge of the fundamentals of Business Intelligence
• In-depth knowledge of RDBMS systems, data warehousing and datamarts
• Smart, motivated and team-oriented
Desirable Requirements
• Sound knowledge of software development in a programming language (preferably Java)
• Knowledge of the software development lifecycle (SDLC) and its models
Job posted by
Shubha N
Data engineering
Python
SQL
Spark
PySpark
Cassandra
Groovy
Amazon Web Services (AWS)
Amazon S3
Windows Azure
Foundry
Good Clinical Practice
E2
R
Palantir
Bengaluru (Bangalore), Pune, Noida, NCR (Delhi | Gurgaon | Noida)
7 - 10 yrs
₹20L - ₹25L / yr
Sr. Data Engineer:

Core Skills – Data Engineering, Big Data, PySpark, Spark SQL and Python

A candidate with a prior Palantir Cloud Foundry or Clinical Trial Data Model background is preferred

Major accountabilities:

  • Responsible for data engineering, Foundry data pipeline creation, Foundry analysis & reporting, Slate application development, re-usable code development & management, and integrating internal or external systems with Foundry for high-quality data ingestion (a minimal transform sketch follows this list)
  • Good understanding of the Foundry platform landscape and its capabilities
  • Performs the data analysis required to troubleshoot data-related issues and assists in their resolution
  • Defines company data assets (data models) and the PySpark / Spark SQL jobs that populate them
  • Designs data integrations and the data quality framework
  • Designs and implements integrations with internal and external systems and the F1 AWS platform using Foundry Data Connector or Magritte agents
  • Collaborates with data scientists, data analysts and technology teams to document and leverage their understanding of the Foundry integration with different data sources; actively participates in agile work practices
  • Coordinates with the Quality Engineer to ensure that all quality controls, naming conventions and best practices are followed
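For illustration, a minimal Foundry-style PySpark transform using the transforms.api decorators; note that this only runs inside a Foundry code repository, and the dataset paths and column names are assumptions, not from this posting.

```python
# Sketch of a Foundry PySpark pipeline step. Dataset paths and columns
# are hypothetical; this code is only executable inside Palantir Foundry.
from pyspark.sql import functions as F
from transforms.api import transform_df, Input, Output

@transform_df(
    Output("/demo/datasets/clean_subjects"),   # hypothetical output path
    raw=Input("/demo/datasets/raw_subjects"),  # hypothetical input path
)
def clean_subjects(raw):
    # Standardise identifiers and drop rows without a subject id
    return (
        raw.withColumn("subject_id", F.upper(F.col("subject_id")))
           .filter(F.col("subject_id").isNotNull())
    )
```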

Desired Candidate Profile :

  • Strong data engineering background
  • Experience with Clinical Data Model is preferred
  • Experience in
    • SQL Server, Postgres, Cassandra, Hadoop, and Spark for distributed data storage and parallel computing
    • Java and Groovy for our back-end applications and data integration tools
    • Python for data processing and analysis
    • Cloud infrastructure based on AWS EC2 and S3
  • 7+ years of IT experience, with 2+ years’ experience on the Palantir Foundry Platform and 4+ years’ experience on a Big Data platform
  • 5+ years of Python and Pyspark development experience
  • Strong troubleshooting and problem solving skills
  • BTech or master's degree in computer science or a related technical field
  • Experience designing, building, and maintaining big data pipeline systems
  • Hands-on experience on Palantir Foundry Platform and Foundry custom Apps development
  • Able to design and implement data integration between Palantir Foundry and external Apps based on Foundry data connector framework
  • Hands-on with programming languages, primarily Python, R, Java and Unix shell scripts
  • Hands-on experience with the AWS / Azure cloud platforms and stacks
  • Strong in API-based architecture and concepts; able to build quick PoCs using API integration and development
  • Knowledge of machine learning and AI
  • Skill and comfort working in a rapidly changing environment with dynamic objectives and iteration with users.

  • Demonstrated ability to continuously learn, work independently, and make decisions with minimal supervision

Job posted by
RAHUL BATTA

Business Analyst

at Catalyst IQ

Founded 2019  •  Services  •  20-100 employees  •  Raised funding
Tableau
SQL
MS-Excel
Python
Data Analytics
Data Analysis
Data Visualization
Mumbai, Bengaluru (Bangalore)
1 - 5 yrs
₹15L - ₹25L / yr
Responsibilities:
● Ability to do exploratory analysis: fetch data from systems and analyze trends.
● Develop customer segmentation models to improve the efficiency of marketing and product campaigns.
● Establish mechanisms for cross-functional teams to consume customer insights to improve engagement along the customer life cycle.
● Gather requirements for dashboards from business, marketing and operations stakeholders.
● Prepare internal reports for executive leadership and support their decision making.
● Analyse data, derive insights and embed them into business actions.
● Work with cross-functional teams.
Skills Required
• A data analytics visionary.
• Strong in SQL & Excel; experience with Tableau is good to have.
• Experience in the field of data analysis and data visualization.
• Strong in analysing data and creating dashboards.
• Strong in communication, presentation and business intelligence.
• A multi-dimensional, "growth hacker" skill set with a strong sense of ownership of work.
• An aggressive “take no prisoners” approach.
Job posted by
Sidharth Maholia