Data Engineer

at NeenOpal Intelligent Solutions Private Limited

Posted by Pavel Gupta
Remote, Bengaluru (Bangalore)
2 - 5 yrs
₹6L - ₹12L / yr
Full time
Skills
ETL
Python
Amazon Web Services (AWS)
SQL
PostgreSQL

We are actively seeking a Senior Data Engineer experienced in building data pipelines and integrations from third-party data sources by writing custom automated ETL jobs in Python. The role works in partnership with other members of the Business Analytics team to support the development and implementation of new and existing data warehouse solutions for our clients, including the design of database import/export processes used to generate client data warehouse deliverables.
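
By way of illustration, here is a minimal sketch of such a custom automated ETL job, assuming a hypothetical third-party REST API as the source and a PostgreSQL warehouse as the target; every endpoint, table, and credential name below is invented.

```python
"""Minimal ETL sketch: third-party API -> pandas -> PostgreSQL.

Illustrative only: the endpoint, DSN, and table names are hypothetical.
"""
import pandas as pd
import requests
from sqlalchemy import create_engine

API_URL = "https://api.example-vendor.com/v1/orders"  # hypothetical source
PG_DSN = "postgresql://etl_user:secret@localhost:5432/warehouse"  # hypothetical target


def extract() -> pd.DataFrame:
    """Pull raw JSON records from the vendor API."""
    resp = requests.get(API_URL, timeout=30)
    resp.raise_for_status()
    return pd.DataFrame(resp.json())  # assumes the API returns a list of records


def transform(df: pd.DataFrame) -> pd.DataFrame:
    """Normalize types and drop rows missing the business key."""
    df["order_date"] = pd.to_datetime(df["order_date"])
    df["amount"] = df["amount"].astype(float)
    return df.dropna(subset=["order_id"])


def load(df: pd.DataFrame) -> None:
    """Append the cleaned batch into the warehouse table."""
    engine = create_engine(PG_DSN)
    df.to_sql("fact_orders", engine, if_exists="append", index=False)


if __name__ == "__main__":
    load(transform(extract()))
```

In practice a job like this would be scheduled (cron, AWS Glue, Azure Data Factory) and would validate control totals at intake and after transformation, as the requirements below describe.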

 

Requirements
  • 2+ years of experience as an ETL developer, with strong data architecture knowledge around data warehousing concepts, SQL development and optimization, and operational support models.
  • Experience using Python to automate ETL/data processing jobs.
  • Design and develop ETL and data processing solutions using data integration tools, Python scripts, and AWS / Azure / on-premise environments.
  • Experience with, or willingness to learn, AWS Glue / AWS Data Pipeline / Azure Data Factory for data integration.
  • Develop transformation queries, views, and stored procedures for ETL processes and process automation.
  • Document data mappings, data dictionaries, processes, programs, and solutions per established data governance standards.
  • Work with the data analytics team to assess and troubleshoot potential data quality issues at key intake points (for example, validating control totals at intake and again after transformation), and transparently build lessons learned into future data quality assessments.
  • Solid experience with data modeling, business logic, and RESTful APIs.
  • Solid experience in the Linux environment.
  • Experience working with relational and NoSQL databases such as MySQL and PostgreSQL (PostgreSQL preferred), including enterprise-level connectivity experience (such as connecting over TLS and through proxies).
  • Experience with NGINX and SSL.
  • Performance-tune data processes and SQL queries, and recommend and implement data process optimization and query-tuning techniques.

About NeenOpal Intelligent Solutions Private Limited

NeenOpal is a global management consulting firm with a unique and specialized focus on Data Science. We provide services across an organization's whole value chain: Digital Strategy, Sales & Marketing, Supply Chain & Logistics, and Finance. Discover how NeenOpal can help you achieve operational efficiency using Data Science.
Founded
2016
Type
Services
Size
20-100 employees
Stage
Bootstrapped

Similar jobs

Data Scientist

at 6sense

Founded 2013  •  Product  •  1000-5000 employees  •  Raised funding
Machine Learning (ML)
Data Science
Natural Language Processing (NLP)
Linear regression
Time series
Python
Remote, Bengaluru (Bangalore), Pune
6 - 9 yrs
₹25L - ₹40L / yr
  • Experience in data cleansing and preparation concepts and tools.
  • Hands-on NLP experience.
  • Programming languages: hands-on with Python and Java.
  • Strong background in relational databases, data modelling, and SQL.
  • Experience working with statistical libraries (e.g. scikit-learn) and frameworks for predictive analytics (a minimal sketch follows this list).
  • Knowledge of Hadoop and UNIX is a plus.
  • Expertise in probability and statistics and time-series analysis, as well as experience with machine learning methods such as linear regression, correlation, and statistical significance testing.
  • Experience in data visualization concepts and tools.
  • Knowledge of Kibana and/or other reporting tools.
  • Knowledge of data quality controls.

Desired Skills and Experience

  • Python, SQL, Java, Data Visualization, scikit-learn
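
As a small, hedged illustration of the scikit-learn experience listed above, a sketch that fits and evaluates a linear regression on synthetic data (the features are invented stand-ins for real project data):

```python
# Fit and score a linear regression on synthetic data with scikit-learn.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 3))  # three synthetic features
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(scale=0.5, size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LinearRegression().fit(X_train, y_train)
print("R^2 on held-out data:", r2_score(y_test, model.predict(X_test)))
```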

Job posted by
Sanish Bhadbhade

GCP Data Engineer

at Beyond Human Resource

Founded 2017  •  Services  •  Raised funding
Google Cloud Platform (GCP)
Data flow
Python
SQL server
Apache Beam
BigQuery
Remote only
4 - 6 yrs
₹8L - ₹12L / yr
Primary Skills:
1. Working knowledge of GCP (Cloud Storage, Cloud Functions, Firestore, DAGs, Airflow/Cloud Composer, Python, Apache Beam, BigQuery); a minimal Beam-to-BigQuery sketch follows this list.
2. Knowledge of Google BigQuery and dbt.
3. Good to have: Python scripting knowledge for data engineering.
4. Terraform knowledge will be an added advantage.
5. Knowledge of data warehousing is a must.
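
For illustration, a hedged sketch of the kind of Apache Beam pipeline into BigQuery that the first point refers to; the bucket, project, dataset, and schema below are placeholders, not a real configuration:

```python
# Minimal Beam pipeline: read CSV lines from GCS, parse, write to BigQuery.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


def parse_line(line: str) -> dict:
    """Turn one CSV line into a row dict matching the BigQuery schema."""
    user_id, event = line.split(",")
    return {"user_id": user_id, "event": event}


with beam.Pipeline(options=PipelineOptions()) as p:
    (
        p
        | "Read" >> beam.io.ReadFromText("gs://example-bucket/events.csv")  # placeholder
        | "Parse" >> beam.Map(parse_line)
        | "Write" >> beam.io.WriteToBigQuery(
            "example-project:analytics.events",  # placeholder table
            schema="user_id:STRING,event:STRING",
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
        )
    )
```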

Secondary Skills:

6. Data analysis; knowledge of Teradata (BTEQ, MLoad).
7. ETL or ELT processes.
8. Building CI/CD pipelines, containerization, etc.
9. Agile ways of working.

Team Lead and Process:

1. Ensure adherence to schedule and quality for activities related to design, build, testing, and implementation of deliverables.
2. Participate in requirement elicitation, validation of architecture, and creation and review of designs.
3. Support the team by providing pseudocode, coordinating with architects to resolve blockers, and assigning and reviewing tasks to ensure quality and timelines are met.
Job posted by
Priya Sahni

Data Scientist

at client of peoplefirst consultants

Machine Learning (ML)
Data Science
Deep Learning
Artificial Intelligence (AI)
Python
SQL
Remote, Chennai
3 - 6 yrs
Best in industry

Skills: Machine Learning, Deep Learning, Artificial Intelligence, Python.

Location: Chennai


Domain knowledge:
Data cleaning, modelling, analytics, statistics, machine learning, AI

Requirements:

· Be part of Digital Manufacturing and Industrie 4.0 projects across the Saint-Gobain group of companies.

· Design and develop AI/ML models to be deployed across SG factories.

· Knowledge of Hadoop, Apache Spark, MapReduce, Scala, Python programming, and SQL and NoSQL databases is required.

· Should be strong in statistics, data analysis, data modelling, machine learning techniques, and neural networks.

· Prior experience in developing AI and ML models is required.

· Experience with data from the manufacturing industry would be a plus.

Roles and Responsibilities:

· Develop AI and ML models for the manufacturing industry with a focus on energy, asset performance optimization, and logistics.

· Multitasking and good communication skills are necessary.

· An entrepreneurial attitude.

 
Job posted by
Aishwarya KA

Senior Data Scientist

at Deepintent

Founded 2015  •  Product  •  20-100 employees  •  Profitable
Forecasting
Time series
Predictive modelling
Python
SQL
Spark
Pune
3 - 8 yrs
₹30L - ₹40L / yr
What We Are Looking For:
The Sr. Data Scientist will be located in Pune, India, or an alternative location, working closely with our analytics teams in New York City, India, and Bosnia. The role is part of our Clinical Insights line of analytics, supporting internal and external business partners in generating analyses and insights for the Outcomes product (measurement of campaign outcomes / script lift) as well as the general DeepIntent product suite. Activities include conducting exploratory data analysis and discovery; creating and scoring audiences; reading campaign results by analyzing medical claims, clinical, demographic, and clickstream data; and performing analysis, creating actionable insights, summarizing them, and presenting results and recommended actions to internal stakeholders and external clients as needed. This role reports directly to the Sr. Director of Outcomes Insights.
Key Responsibilities:
  • Time-series modeling and forecasting.
  • Predictive modeling (e.g. xgboost, deep learning) on large datasets (see the sketch after this list).
  • Building data ingestion pipelines and transforming data into metrics useful for analytics and modeling.
  • Hypothesis testing, experimental design, and A/B testing.
  • Writing production-level code in Python and SQL (BigQuery/Spark), with Git experience.
  • Supporting the business development and client analytics and insights process, under supervision of the director / sr. data scientist, utilizing consumer demographic, clickstream, and clinical data (claims and medications).
  • Core activities include: campaign audience sizing estimates, generating lookalike and campaign audiences, generating standardized reporting deliverables on media performance, and packaging insights into relevant client stories.
  • Extracting, exploring, visualizing, and analyzing large healthcare claims, consumer demographic, prospecting, and clickstream data using SQL and Python or R libraries.
  • Generating scripts for audience creation using SQL, Python / R, and API call infrastructure.
  • Understanding the objectives of client campaigns: audience selection (diagnostics), creative, and channel.
  • Supporting internal product development of data tools, dashboards, and forecasts, as needed.
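
As a hedged illustration of the predictive-modeling responsibility above, a minimal xgboost sketch on synthetic data standing in for real claims and clickstream features:

```python
# Train and evaluate a gradient-boosted classifier on synthetic data.
import numpy as np
import xgboost as xgb
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
X = rng.normal(size=(2000, 10))  # stand-in feature matrix
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=2000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = xgb.XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss")
model.fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```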
Who You Are:
  • You have a working understanding of ad-tech / digital marketing and advertising data and campaigns, and an interest in (and aptitude for) learning US healthcare patient and provider systems (e.g. medical claims, medications, etc.).
  • Desire to work in a rapidly growing and scaling startup, with a strong culture of fast-paced cross functional collaboration.
  • Hands-on predictive modeling experience (decision trees, boosting algorithms and regression models).
  • Orientation toward, and interest in, translating complex quantitative results into meaningful findings and interpretable deliverables, and communicating with less technical audiences.
  • Hypothesis-oriented curiosity and tenacity in obtaining meaningful results through iterative data analysis and data prep.
  • “Can do” attitude, outstanding technical troubleshooting and problem-solving abilities, aptitude to rapidly develop working knowledge of new tools, open source libraries, data sources etc.
  • Ability to meet deadlines and flexibility to work constructively with shifting priorities.
  • You have strong communication and presentation skills backed by strong critical thinking.
What You’ve Accomplished:
  • Bachelor’s degree in a STEM field, such as Statistics, Mathematics, Engineering, Biostatistics, Econometrics, Economics, Finance, or Data Science.
  • Minimum of 5 years of working experience as a Data Analyst, Engineer, Data Scientist, or Researcher in digital marketing, consumer advertisement, telecom, healthcare, or other areas requiring customer-level predictive analytics.
  • Proficiency in performing statistical analysis in R or Python, including relevant libraries is required. Prior experience in using these tools in analytical R&D strongly preferred.
  • Advanced ability to use relevant technology/software to wrangle data, perform analytics, and visualize for consumption is required.
  • Experience with SQL is required.
  • Advanced experience with the basic Office suite (Excel, PowerPoint) is required.
  • Familiarity with medical and healthcare data preferred (medical claims, Rx, etc.).
  • Experience with cloud technologies such as AWS or Google Cloud is required.
  • Exposure to big data tools (Hadoop, PySpark) is preferred.
  • Experience with Git/version control and Jira/ticketing system is strongly preferred.
  • Experience with a visualization tool such as Looker and / or Tableau, preferred.
Job posted by
Indrajeet Deshmukh

Data Architect

at Hypersonix Inc

Founded 2018  •  Product  •  100-500 employees  •  Profitable
Big Data
Data Warehouse (DWH)
Apache Kafka
Spark
Hadoop
Data engineering
Artificial Intelligence (AI)
Machine Learning (ML)
Data Structures
Data modeling
Data wrangling
Data integration
Data-driven testing
Database performance tuning
Apache Storm
Python
Scala
SQL
Amazon Web Services (AWS)
SQL Azure
Kafka
Databricks
Flink
Druid
Airflow
Luigi
NiFi
Talend
Bengaluru (Bangalore)
10 - 15 yrs
₹15L - ₹20L / yr
Hypersonix.ai is disrupting the Business Intelligence and Analytics space with AI, ML, and NLP capabilities to drive specific business insights with a conversational user experience. Hypersonix.ai has been built from the ground up with new-age technology to simplify the consumption of data for our customers in restaurants, hospitality, and other industry verticals.

Hypersonix.ai is seeking a Data Evangelist who can work closely with customers to understand the data sources, acquire data and drive product success by delivering insights based on customer needs.

Primary Responsibilities :

- Lead and deliver complete application lifecycle design, development, deployment, and support for actionable BI and Advanced Analytics solutions

- Design and develop data models and ETL process for structured and unstructured data that is distributed across multiple Cloud platforms

- Develop and deliver solutions with data streaming capabilities for a large volume of data

- Design, code and maintain parts of the product and drive customer adoption

- Build data acquisition strategy to onboard customer data with speed and accuracy

- Work both independently and with team members to develop, refine, implement, and scale ETL processes

- Ongoing support and maintenance of live clients for their data and analytics needs

- Defining the data automation architecture to drive self-service data load capabilities

Required Qualifications :

- Bachelor's/Master's/Ph.D. in Computer Science, Information Systems, Data Science, Artificial Intelligence, Machine Learning, or related disciplines

- 10+ years of experience guiding the development and implementation of Data architecture in structured, unstructured, and semi-structured data environments.

- Highly proficient in Big Data, data architecture, data modeling, data warehousing, data wrangling, data integration, data testing and application performance tuning

- Experience with data engineering tools and platforms such as Kafka, Spark, Databricks, Flink, Storm, Druid and Hadoop

- Strong hands-on programming and scripting skills for the Big Data ecosystem (Python, Scala, Spark, etc.)

- Experience building batch and streaming ETL data pipelines using workflow management tools like Airflow, Luigi, NiFi, Talend, etc. (a minimal Airflow sketch follows this list)

- Familiarity with cloud-based platforms like AWS, Azure or GCP

- Experience with cloud data warehouses like Redshift and Snowflake

- Proficient in writing complex SQL queries.

- Excellent communication skills and prior experience of working closely with customers

- Data savvy: loves to understand large data trends and is obsessed with data analysis

- Desire to learn about, explore, and invent new tools for solving real-world problems using data
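
As an illustration of the workflow-managed batch pipelines mentioned in the qualifications above, a minimal Airflow DAG sketch; the DAG id, schedule, and task bodies are placeholders, not an actual Hypersonix pipeline:

```python
# Three-step ETL DAG: extract -> transform -> load, scheduled daily.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pull raw data from source systems")  # placeholder task body


def transform():
    print("clean and model the data")  # placeholder task body


def load():
    print("write results to the warehouse")  # placeholder task body


with DAG(
    dag_id="example_batch_etl",  # hypothetical DAG id
    start_date=datetime(2021, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3
```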

Desired Qualifications :

- Cloud computing experience, Amazon Web Services (AWS)

- Prior experience in Data Warehousing concepts, multi-dimensional data models

- Full command of Analytics concepts including Dimension, KPI, Reports & Dashboards

- Prior experience in managing client implementation of Analytics projects

- Knowledge and prior experience of using machine learning tools
Job posted by
Gowshini Maheswaran

Data Engineer

at Srijan Technologies

Founded 2002  •  Products & Services  •  100-1000 employees  •  Profitable
PySpark
SQL
Data modeling
Data Warehouse (DWH)
Informatica
ETL
Python
Remote only
2 - 6 yrs
₹8L - ₹13L / yr
  • 3+ years of professional work experience with a reputed analytics firm
  • Expertise in handling large amounts of data through Python or PySpark (a minimal sketch follows this list)
  • Conduct data assessments, perform data quality checks, and transform data using SQL and ETL tools
  • Experience deploying ETL / data pipelines and workflows in cloud technologies and architectures such as Azure and Amazon Web Services will be valued
  • Comfort with data modelling principles (e.g. database structure, entity relationships, UIDs, etc.) and software development principles (e.g. modularization, testing, refactoring, etc.)
  • A thoughtful and comfortable communicator (verbal and written) with the ability to facilitate discussions and conduct training
  • Track record of strong problem-solving, requirement gathering, and leading by example
  • Ability to thrive in a flexible and collaborative environment
  • Track record of completing projects successfully on time, within budget, and as per scope
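
By way of illustration, a minimal PySpark sketch of the data-assessment and quality-check work described above; the column names and paths are invented:

```python
# Basic data-quality checks and a cleaning step with PySpark.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq_checks").getOrCreate()
df = spark.read.parquet("s3://example-bucket/input/")  # placeholder path

# Quality checks: null rate on the key column and duplicate-key count.
total = df.count()
null_ids = df.filter(F.col("customer_id").isNull()).count()
dupes = total - df.dropDuplicates(["customer_id"]).count()
print(f"rows={total}, null_ids={null_ids}, duplicate_keys={dupes}")

# Example transformation: standardize a timestamp into a date column.
clean = df.withColumn("event_date", F.to_date("event_ts"))
clean.write.mode("overwrite").parquet("s3://example-bucket/clean/")  # placeholder
```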
Job posted by
Priya Saini

SQL Developer

at Hex Business Innovations

Founded 2006  •  Products & Services  •  0-20 employees  •  Profitable
SQL
SQL server
MySQL
MS SQLServer
C#
.NET
Faridabad
0 - 4 yrs
₹1L - ₹3L / yr

Job Summary
SQL development for our Enterprise Resource Planning (ERP) product offered to SMEs. Regular modification, creation, and validation, with testing, of stored procedures, views, and functions on MS SQL Server.
Responsibilities and Duties
Understanding the ERP software and its use cases.
Regular creation, modification, and testing of:

  • Stored Procedures
  • Views
  • Functions
  • Nested Queries
  • Table and Schema Designs

Qualifications and Skills
MS SQL

  • Procedural Language
  • Datatypes
  • Objects
  • Databases
  • Schema
Job posted by
Dhruv Dua

Data Engineer

at Cloud infrastructure solutions and support company. (SE1)

Agency job
via Multi Recruit
SQL
ETL
Data engineering
Big Data
Java
Python
PySpark
Pune
2 - 6 yrs
₹12L - ₹16L / yr
  • Design, create, test, and maintain data pipeline architecture in collaboration with the Data Architect.
  • Build the infrastructure required for extraction, transformation, and loading of data from a wide variety of data sources using Java, SQL, and Big Data technologies.
  • Support the translation of data needs into technical system requirements. Support in building complex queries required by the product teams.
  • Build data pipelines that clean, transform, and aggregate data from disparate sources
  • Develop, maintain and optimize ETLs to increase data accuracy, data stability, data availability, and pipeline performance.
  • Engage with Product Management and Business to deploy and monitor products/services on cloud platforms.
  • Stay up-to-date with advances in data persistence and big data technologies and run pilots to design the data architecture to scale with the increased data sets of consumer experience.
  • Handle data integration, consolidation, and reconciliation activities for digital consumer / medical products.

Job Qualifications:

  • Bachelor's or Master's degree in Computer Science, Information Management, Statistics, or a related field
  • 5+ years of experience in the Consumer or Healthcare industry in an analytical role, with a focus on building data pipelines, querying data, analyzing, and clearly presenting analyses to members of the data science team.
  • Technical expertise with data models, data mining.
  • Hands-on Knowledge of programming languages in Java, Python, R, and Scala.
  • Strong knowledge of Big Data tools like Snowflake, AWS Redshift, Hadoop, MapReduce, etc.
  • Knowledge of tools like AWS Glue, S3, AWS EMR, streaming data pipelines, and Kafka/Kinesis is desirable.
  • Hands-on knowledge of SQL and NoSQL database design.
  • Knowledge of CI/CD for building and hosting solutions.
  • An AWS certification is an added advantage.
  • Strong knowledge of visualization tools like Tableau and QlikView is an added advantage.
  • A team player capable of working and integrating across cross-functional teams for implementing project requirements. Experience in technical requirements gathering and documentation.
  • Ability to work effectively and independently in a fast-paced agile environment with tight deadlines
  • A flexible, pragmatic, and collaborative team player with the innate ability to engage with data architects, analysts, and scientists
Job posted by
Ranjini A R

Sr. Data Analyst

at Global internet of things connected solutions provider(H1)

Agency job
via Multi Recruit
Data Analyst
Data Analytics
MySQL
PostgreSQL
Data Analysis
Excel
Bengaluru (Bangalore)
3 - 6 yrs
₹12L - ₹15L / yr
  • Managing and designing the reporting environment, including data sources, security, and metadata.
  • Preparing reports for executive leadership that effectively communicate trends, patterns, and predictions using relevant data
  • Establish KPIs to measure the effectiveness of business decisions.
  • Work with management to prioritize business and information needs.
  • Provide data solutions, tools, and capabilities to enable self-service frameworks for data consumers
  • Provide expertise to translate business needs into designs, and develop tools, techniques, metrics, and dashboards for insights and data visualization.
  • Responsible for developing and executing tools to monitor and report on data quality.
  • Responsible for establishing appreciation and adherence to the principles of data quality management, including metadata, lineage, and business definitions
  • Provide support to Tech teams in managing security mechanisms and data access governance
  • Provide technical support, mentoring, and training to less senior analysts.
  • Derive insights through A/B tests, funnel analysis, and user segmentation (a minimal sketch follows this list).
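
As a hedged illustration of the A/B-testing responsibility above, a minimal two-sample test on synthetic conversion data; a real analysis would also consider sample size, power, and multiple testing:

```python
# Two-sample t-test comparing conversion rates of control vs. variant.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.binomial(1, 0.10, size=5000)  # 10% baseline conversion (synthetic)
variant = rng.binomial(1, 0.12, size=5000)  # 12% in the test group (synthetic)

t_stat, p_value = stats.ttest_ind(variant, control)
print(f"lift={variant.mean() - control.mean():.4f}, p={p_value:.4f}")
```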

Key Criteria:

  • 3+ years in a data analyst position, preferably working as a Data Analyst in a fast-paced and dynamic business setting.
  • Strong SQL-based querying skills (MySQL, PostgreSQL) and Excel skills, with the ability to learn other analytic tools.
  • Scripting experience (Python, Perl, JavaScript, shell).
  • Skilled in statistical and econometric modeling, performing quantitative analysis, and technological data mining and analysis techniques.
  • This role requires a mixture of data schema knowledge and technical writing activities, paired with hands-on and collaborative work with systems analysts. Technical exposure through requirements, QA, or development software lifecycles is also a plus.
  • Demonstrated analytical skills: ability to work with large amounts of data (facts, figures, and number crunching) and to see through the data and analyze it to find conclusions.
  • Excellent attention to detail: data needs to be precise, and conclusions drawn from data analysis will drive critical client decisions.
  • Domain knowledge in the Internet of Things is a plus.
  • Managing a junior team of analysts; exceptional writing and verbal communication skills are crucial for performing the job duties and managing others.
Qualifications:
  • B.E/ B.Tech./ M. E/ M. Tech from any recognized university in India.
  • Minimum 60% in Graduation or Post-Graduation
  • SQL knowledge and hands-on experience is a must.
  • Great interpersonal and communication skills
Job posted by
Santhosh Kumar KR

ETL Engineer - Data Pipeline

at DataToBiz

Founded 2018  •  Services  •  20-100 employees  •  Bootstrapped
ETL
Amazon Web Services (AWS)
Amazon Redshift
Python
Chandigarh, NCR (Delhi | Gurgaon | Noida)
2 - 6 yrs
₹7L - ₹15L / yr
Job Responsibilities:
  • Develop new data pipelines and ETL jobs for processing millions of records, built to scale with growth.
  • Optimize pipelines to handle real-time data, batch-update data, and historical data.
  • Establish scalable, efficient, automated processes for complex, large-scale data analysis.
  • Write high-quality code to gather and manage large data sets (both real-time and batch) from multiple sources, perform ETL, and store the data in a data warehouse.
  • Manipulate and analyse complex, high-volume, high-dimensional data from varying sources using a variety of tools and data analysis techniques.
  • Participate in data pipeline health monitoring and performance optimization, as well as quality documentation.
  • Interact with end users/clients and translate business language into technical requirements.
  • Act independently to expose and resolve problems.

Job Requirements:
  • 2+ years of experience in software development and data pipeline development for enterprise analytics.
  • 2+ years of working with Python, with exposure to various warehousing tools.
  • In-depth work with commercial tools like AWS Glue, Talend, Informatica, DataStage, etc.
  • Experience with various relational databases like MySQL, MS SQL, and Oracle is a must.
  • Experience with analytics and reporting tools (Tableau, Power BI, SSRS, SSAS).
  • Experience in various DevOps practices, helping the client deploy and scale systems as required.
  • Strong verbal and written communication skills with other developers and business clients.
  • Knowledge of the Logistics and/or Transportation domain is a plus.
  • Hands-on with traditional databases and ERP systems like Sybase and PeopleSoft.
Job posted by
PS Dhillon