Data Engineer

at SenecaGlobal

Posted by Shiva V
Remote, Hyderabad
4 - 6 yrs
₹15L - ₹20L / yr
Full time
Skills
Python
PySpark
Spark
Scala
Microsoft Azure Data factory
• Strong experience with Python or Scala, and with PySpark/Spark
• Experience with advanced SQL
• Experience with Azure Data Factory and Databricks
• Experience with Azure IoT, Cosmos DB, Blob Storage
• Experience with API management and FHIR API development
• Proficient with Git and CI/CD best practices
• Experience working with Snowflake is a plus
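The "advanced SQL" requirement typically means analytic constructs such as window functions. As a minimal, hypothetical sketch (the table and column names are invented for illustration), a per-device running total can be computed with a window function, here run through Python's built-in sqlite3 driver (requires SQLite 3.25+ for window-function support):

```python
import sqlite3

# In-memory database with a tiny, made-up IoT readings table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (device_id TEXT, day INTEGER, units REAL)")
conn.executemany(
    "INSERT INTO readings VALUES (?, ?, ?)",
    [("d1", 1, 10.0), ("d1", 2, 5.0), ("d2", 1, 7.0), ("d2", 2, 3.0)],
)

# Window function: per-device running total, ordered by day.
rows = conn.execute(
    """
    SELECT device_id, day,
           SUM(units) OVER (PARTITION BY device_id ORDER BY day) AS running_units
    FROM readings
    ORDER BY device_id, day
    """
).fetchall()

for row in rows:
    print(row)
```

The same query shape carries over to Snowflake or Databricks SQL, which is where a role like this would normally run it.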

About SenecaGlobal

Whether you’re moving to the cloud, creating a new solution or transforming existing technology investments, SenecaGlobal can help.
Founded: 2007
Type: Products & Services
Size: 100-1000 employees
Stage: Profitable

Similar jobs

Data Engineer - Global Media Agency

at client of Merito

Agency job
via Merito
Python
SQL
Tableau
PowerBI
PHP
Snowflake
Data engineering
Mumbai
3 - 8 yrs
Best in industry

Our client is the world’s largest media investment company and a part of WPP. In fact, they are responsible for one in every three ads you see globally. We are currently looking for a Senior Software Engineer to join us. In this role, you will be responsible for coding/implementing custom marketing applications that the Tech COE builds for its customers, and for managing a small team of developers.

 

What your day job looks like:

  • Serve as a Subject Matter Expert on data usage – extraction, manipulation, and inputs for analytics
  • Develop data extraction and manipulation code based on business rules
  • Develop automated and manual test cases for the code written
  • Design and construct data store and procedures for their maintenance
  • Perform data extract, transform, and load activities from several data sources.
  • Develop and maintain strong relationships with stakeholders
  • Write high quality code as per prescribed standards.
  • Participate in internal projects as required
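The extract-transform-load responsibilities above can be sketched in plain Python. This is a toy illustration only: the business rule, field names, and in-memory "source" are all invented, and a real pipeline would read from actual source systems and a managed warehouse.

```python
import sqlite3

# Extract: pretend these rows came from a source system.
raw_rows = [
    {"campaign": "summer", "spend": "1200.50", "clicks": "300"},
    {"campaign": "winter", "spend": "n/a", "clicks": "150"},  # dirty record
]

def transform(row):
    """Apply a made-up business rule: drop rows with unparseable spend."""
    try:
        spend = float(row["spend"])
    except ValueError:
        return None
    return (row["campaign"], spend, int(row["clicks"]))

# Load: write the cleaned rows into a target table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE campaign_stats (campaign TEXT, spend REAL, clicks INTEGER)")
clean = [t for t in (transform(r) for r in raw_rows) if t is not None]
conn.executemany("INSERT INTO campaign_stats VALUES (?, ?, ?)", clean)

loaded = conn.execute("SELECT * FROM campaign_stats").fetchall()
print(loaded)
```

Writing the transform as a small pure function also makes the "automated test cases" bullet straightforward: each business rule gets its own unit test.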

 
Minimum qualifications:

  • B. Tech./MCA or equivalent preferred
  • At least 3 years of hands-on experience in big data, ETL development, and data processing


    What you’ll bring:

  • Strong experience working with Snowflake, SQL, and PHP/Python
  • Strong experience writing complex SQL queries
  • Good communication skills
  • Good experience with a BI tool such as Tableau or Power BI
  • Sqoop, Spark, EMR, and Hadoop/Hive are good to have

 

 

Job posted by
Merito Talent
Big Data
Big Data Engineer
Spark
Apache Spark
Scala
Apache Hive
Apache HBase
Hyderabad, Pune, Chennai, Bengaluru (Bangalore), Mumbai
5 - 6 yrs
₹18L - ₹25L / yr

 

Experience: 5-6+ years

 

 

Must Have

 

  • Apache Spark, Spark Streaming, Scala programming, Apache HBase
  • Unix scripting, SQL knowledge

Good to Have

  • Experience working with a graph database, preferably JanusGraph
  • Experience working with document databases and Apache Solr

 

Job Description

Data Engineer with experience in the following areas:

 

  • Designing and implementing high-performance data ingestion pipelines from multiple sources using Scala and Apache Spark
  • Experience with event-based Spark Streaming technologies to ingest data
  • Developing scalable and reusable frameworks for ingesting data sets
  • Integrating end-to-end data pipelines to take data from source systems to target data repositories, ensuring the quality and consistency of data at all times
  • Preference for big data certifications such as Cloudera Certified Professional (CCP) and Cloudera Certified Associate (CCA)
  • Working within an Agile delivery methodology to deliver product implementations in iterative sprints
  • Strong knowledge of data management principles
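The ingestion pipeline described above would normally be built in Scala with Spark Structured Streaming. As a language-neutral sketch of the underlying micro-batch idea (validate each event, de-duplicate across batches, then write to a sink), here is the logic in plain Python; every event field here is invented for illustration:

```python
def process_batch(events, seen_ids, sink):
    """Validate and de-duplicate one micro-batch of events, then 'write' them."""
    for event in events:
        # Schema/quality check: require an id and a non-negative value.
        if "id" not in event or event.get("value", -1) < 0:
            continue
        # De-duplication across batches, keyed on event id.
        if event["id"] in seen_ids:
            continue
        seen_ids.add(event["id"])
        sink.append(event)

sink, seen = [], set()
process_batch([{"id": 1, "value": 5}, {"id": 1, "value": 5}, {"value": 9}], seen, sink)
process_batch([{"id": 1, "value": 5}, {"id": 2, "value": 7}], seen, sink)
print(len(sink))  # two unique, valid events survive
```

In Spark itself the de-duplication step corresponds roughly to `dropDuplicates` with a watermark, and the validation step to a filter on the parsed event schema.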

 

Location: PAN INDIA

Job posted by
Swagatika Sahoo

Data Scientist

at Disruptive Fintech Startup

Agency job
via Unnati
Data Science
Data Analytics
R Programming
Python
Investment analysis
credit rating
icon
Bengaluru (Bangalore)
icon
3 - 7 yrs
icon
₹8L - ₹12L / yr
If you are interested in joining a purpose-driven community that is dedicated to creating ambitious and inclusive workplaces, then be a part of a high growth startup with a world-class team, building a revolutionary product!
 
Our client is a vertical fintech play focused on solving industry-specific financing gaps in the food sector through the application of data. The platform provides skin-in-the-game growth capital to much-loved F&B brands. Founded in 2019, they’re VC-funded and based out of Singapore and Bangalore, India.
 
The founders are alumni of IIT-D, IIM-B and Wharton, with 12+ years of experience spanning venture capital and corporate entrepreneurship at DFJ, Vertex and InMobi, a VP role at Snyder UAE, investment banking at Unitus Capital (leading the financial services practice), and institutional equities at Kotak. They have brought together a team of high-quality professionals on a mission to disrupt the convention.
 
 
As a Data Scientist, you will develop a first-of-its-kind risk engine for revenue-based financing in India and automate investment appraisals for the company's different revenue-based financing products.

What you will do:
 
  • Identifying alternate data sources beyond financial statements and implementing them as a part of assessment criteria
  • Automating appraisal mechanisms for all newly launched products and revisiting the same for an existing product
  • Back-testing investment appraisal models at regular intervals to improve the same
  • Complementing appraisals with portfolio data analysis and portfolio monitoring at regular intervals
  • Working closely with the business and the technology team to ensure the portfolio is performing as per internal benchmarks and that relevant checks are put in place at various stages of the investment lifecycle
  • Identifying relevant sub-sector criteria to score and rate investment opportunities internally
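Back-testing an appraisal model, as listed above, amounts to replaying historical decisions and comparing the model's approvals against realized outcomes. A deliberately simplified sketch follows; the scoring rule, threshold, and record fields are all invented for illustration and bear no relation to any real underwriting model:

```python
def score(applicant):
    """Toy appraisal rule: revenue relative to requested amount (hypothetical)."""
    return applicant["monthly_revenue"] / max(applicant["requested_amount"], 1)

def back_test(history, threshold):
    """Fraction of approved deals that actually repaid, on historical data."""
    approved = [h for h in history if score(h) >= threshold]
    if not approved:
        return None
    return sum(h["repaid"] for h in approved) / len(approved)

history = [
    {"monthly_revenue": 50, "requested_amount": 100, "repaid": True},
    {"monthly_revenue": 90, "requested_amount": 100, "repaid": True},
    {"monthly_revenue": 10, "requested_amount": 100, "repaid": False},
]
print(back_test(history, threshold=0.4))
```

Running the back-test at several thresholds is what lets the model be revisited at regular intervals, as the responsibilities above describe.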

 


Desired Candidate Profile

What you need to have:
 
  • Bachelor’s degree with relevant work experience of at least 3 years with CA/MBA (mandatory)
  • Experience in working in lending/investing fintech (mandatory)
  • Strong Excel skills (mandatory)
  • Previous experience in credit rating or credit scoring or investment analysis (preferred)
  • Prior exposure to working on data-led models on payment gateways or accounting systems (preferred)
  • Proficiency in data analysis (preferred)
  • Good verbal and written skills
Job posted by
Sarika Tamhane

Data Scientist

at Marktine

Founded 2014  •  Products & Services  •  20-100 employees  •  Bootstrapped
Data Science
R Programming
Python
SQL
Machine Learning (ML)
Natural Language Processing (NLP)
icon
Remote, Bengaluru (Bangalore)
icon
3 - 7 yrs
icon
₹10L - ₹24L / yr

Responsibilities:

  • Design and develop robust analytics systems and predictive models
  • Manage a team of data scientists, machine learning engineers, and big data specialists
  • Identify valuable data sources and automate data collection processes
  • Undertake pre-processing of structured and unstructured data
  • Analyze large amounts of information to discover trends and patterns
  • Build predictive models and machine-learning algorithms
  • Combine models through ensemble modeling
  • Present information using data visualization techniques
  • Propose solutions and strategies to business challenges
  • Collaborate with engineering and product development teams
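"Combine models through ensemble modeling", as listed above, can be as simple as averaging the predictions of several base models. A minimal illustration with made-up base models follows; real work would use scikit-learn estimators or gradient-boosted ensembles rather than these toy functions:

```python
def model_a(x):
    return 2 * x          # hypothetical base model

def model_b(x):
    return 2 * x + 4      # hypothetical base model with a constant bias

def ensemble(models, x):
    """Average the base models' predictions for input x."""
    preds = [m(x) for m in models]
    return sum(preds) / len(preds)

print(ensemble([model_a, model_b], 3))  # (6 + 10) / 2 = 8.0
```

Averaging reduces the variance of the individual models; weighted averaging or stacking are the usual next steps.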

Requirements:

  • Proven experience as a seasoned Data Scientist
  • Good Experience in data mining processes
  • Understanding of machine learning and Knowledge of operations research is a value addition
  • Strong understanding of and experience with R, SQL, and Python; knowledge of Scala, Java, or C++ is an asset
  • Experience using business intelligence tools (e.g. Tableau) and data frameworks (e.g. Hadoop)
  • Strong math skills (e.g. statistics, algebra)
  • Problem-solving aptitude
  • Excellent communication and presentation skills
  • Experience in Natural Language Processing (NLP)
  • Strong competitive coding skills
  • BSc/BA in Computer Science, Engineering or relevant field; graduate degree in Data Science or other quantitative field is preferred
Job posted by
Vishal Sharma

Big Data Engineer

at StatusNeo

Founded 2020  •  Products & Services  •  100-1000 employees  •  Profitable
Big Data
Data engineering
Scala
Apache Hive
Hadoop
Python
Linux/Unix
DBMS
icon
Hyderabad, Bengaluru (Bangalore), Gurugram
icon
3 - 7 yrs
icon
₹2L - ₹20L / yr

Big Data JD:

 

Data Engineer – SQL, RDBMS, pySpark/Scala, Python, Hive, Hadoop, Unix

 

Data engineering services required:

  • Build data products and processes alongside the core engineering and technology team
  • Collaborate with senior data scientists to curate, wrangle, and prepare data for use in their advanced analytical models
  • Integrate data from a variety of sources, assuring that they adhere to data quality and accessibility standards
  • Modify and improve data engineering processes to handle ever larger, more complex, and more types of data sources and pipelines
  • Use Hadoop architecture and HDFS commands to design and optimize data queries at scale
  • Evaluate and experiment with novel data engineering tools and advise information technology leads and partners about new capabilities to determine optimal solutions for particular technical problems or designated use cases

Big data engineering skills:

  • Demonstrated ability to perform the engineering necessary to acquire, ingest, cleanse, integrate, and structure massive volumes of data from multiple sources and systems into enterprise analytics platforms
  • Proven ability to design and optimize queries to build scalable, modular, efficient data pipelines
  • Ability to work across structured, semi-structured, and unstructured data, extracting information and identifying linkages across disparate data sets
  • Proven experience delivering production-ready data engineering solutions, including requirements definition, architecture selection, prototype development, debugging, unit testing, deployment, support, and maintenance
  • Ability to operate with a variety of data engineering tools and technologies; vendor-agnostic candidates preferred

Domain and industry knowledge:

  • Strong collaboration and communication skills to work within and across technology teams and business units
  • Demonstrates the curiosity, interpersonal abilities, and organizational skills necessary to serve as a consulting partner, includes the ability to uncover, understand, and assess the needs of various business stakeholders
  • Experience with problem discovery, solution design, and insight delivery that involves frequent interaction, education, engagement, and evangelism with senior executives
  • Ideal candidate will have extensive experience with the creation and delivery of advanced analytics solutions for healthcare payers or insurance companies, including anomaly detection, provider optimization, studies of sources of fraud, waste, and abuse, and analysis of clinical and economic outcomes of treatment and wellness programs involving medical or pharmacy claims data, electronic medical record data, or other health data
  • Experience with healthcare providers, pharma, or life sciences is a plus

 

Job posted by
Alex P

Data Engineer

at MNC Company - Product Based

Agency job
via Bharat Headhunters
Data Warehouse (DWH)
Informatica
ETL
Python
Google Cloud Platform (GCP)
SQL
Airflow
icon
Bengaluru (Bangalore), Chennai, Hyderabad, Pune, Delhi, Gurugram, Noida, Ghaziabad, Faridabad
icon
5 - 9 yrs
icon
₹10L - ₹15L / yr

Job Responsibilities

  • Design, build & test ETL processes using Python & SQL for the corporate data warehouse
  • Inform, influence, support, and execute our product decisions
  • Maintain advertising data integrity by working closely with R&D to organize and store data in a format that provides accurate data and allows the business to quickly identify issues.
  • Evaluate and prototype new technologies in the area of data processing
  • Think quickly, communicate clearly and work collaboratively with product, data, engineering, QA and operations teams
  • High energy level, strong team player and good work ethic
  • Data analysis, understanding of business requirements and translation into logical pipelines & processes
  • Identification, analysis & resolution of production & development bugs
  • Support the release process including completing & reviewing documentation
  • Configure data mappings & transformations to orchestrate data integration & validation
  • Provide subject matter expertise
  • Document solutions, tools & processes
  • Create & support test plans with hands-on testing
  • Peer reviews of work developed by other data engineers within the team
  • Establish good working relationships & communication channels with relevant departments

 

Skills and Qualifications we look for

  • University degree 2.1 or higher (or equivalent) in a relevant subject. Master’s degree in any data subject will be a strong advantage.
  • 4 - 6 years experience with data engineering.
  • Strong coding ability and software development experience in Python.
  • Strong hands-on experience with SQL and Data Processing.
  • Google Cloud Platform (Cloud Composer, Dataflow, Cloud Functions, BigQuery, Cloud Storage, Dataproc)
  • Good working experience with at least one ETL/orchestration tool (Airflow preferred)
  • Strong analytical and problem-solving skills
  • Good to have: Apache PySpark, CircleCI, Terraform
  • Motivated, self-directed, able to work with ambiguity and interested in emerging technologies, agile and collaborative processes.
  • Understanding & experience of agile / scrum delivery methodology

 

Job posted by
Ranjini C. N

Big Data Spark Lead

at Datametica Solutions Private Limited

Founded 2013  •  Products & Services  •  100-1000 employees  •  Profitable
Apache Spark
Big Data
Spark
Scala
Hadoop
MapReduce
Java
Apache Hive
icon
Pune, Hyderabad
icon
7 - 12 yrs
icon
₹7L - ₹20L / yr
We at Datametica Solutions Private Limited are looking for a Big Data Spark Lead with a passion for the cloud and knowledge of on-premise and cloud data implementations in the field of Big Data and Analytics, including but not limited to Teradata, Netezza, Exadata, Oracle, Cloudera, Hortonworks and the like.
Ideal candidates should have technical experience in migrations and the ability to help customers get value from Datametica's tools and accelerators.

Job Description
Experience : 7+ years
Location : Pune / Hyderabad
Skills :
  • Drive and participate in requirements gathering workshops, estimation discussions, design meetings and status review meetings
  • Participate and contribute in Solution Design and Solution Architecture for implementing Big Data Projects on-premise and on cloud
  • Technical Hands on experience in design, coding, development and managing Large Hadoop implementation
  • Proficient in SQL, Hive, Pig, Spark SQL, shell scripting, Kafka, Flume and Sqoop on large big data and data warehousing projects, with a Java, Python or Scala based Hadoop programming background
  • Proficient with various development methodologies like waterfall, agile/scrum and iterative
  • Good Interpersonal skills and excellent communication skills for US and UK based clients

About Us!
A global Leader in the Data Warehouse Migration and Modernization to the Cloud, we empower businesses by migrating their Data/Workload/ETL/Analytics to the Cloud by leveraging Automation.

We have expertise in transforming legacy Teradata, Oracle, Hadoop, Netezza, Vertica, Greenplum along with ETLs like Informatica, Datastage, AbInitio & others, to cloud-based data warehousing with other capabilities in data engineering, advanced analytics solutions, data management, data lake and cloud optimization.

Datametica is a key partner of the major cloud service providers - Google, Microsoft, Amazon, Snowflake.


We have our own products!
Eagle – Data Warehouse Assessment & Migration Planning Product
Raven – Automated Workload Conversion Product
Pelican – Automated Data Validation Product, which helps automate and accelerate data migration to the cloud

Why join us!
Datametica is a place to innovate, bring new ideas to life and learn new things. We believe in building a culture of innovation, growth and belonging. Our people and their dedication over the years are the key factors in achieving our success.

Benefits we Provide!
Working with Highly Technical and Passionate, mission-driven people
Subsidized Meals & Snacks
Flexible Schedule
Approachable leadership
Access to various learning tools and programs
Pet Friendly
Certification Reimbursement Policy

Check out more about us on our website below!
www.datametica.com
Job posted by
Sumangali Desai

Data Scientist

at upGrad

Founded 2015  •  Product  •  100-500 employees  •  Raised funding
Data Science
R Programming
Python
SQL
Natural Language Processing (NLP)
Machine Learning (ML)
Tableau
icon
Bengaluru (Bangalore), Mumbai
icon
4 - 6 yrs
icon
₹10L - ₹21L / yr

About Us

upGrad is an online education platform building the careers of tomorrow by offering the most industry-relevant programs in an immersive learning experience. Our mission is to create a new digital-first learning experience to deliver tangible career impact to individuals at scale. upGrad currently offers programs in Data Science, Machine Learning, Product Management, Digital Marketing, and Entrepreneurship, etc. upGrad is looking for people passionate about management and education to help design learning programs for working professionals to stay sharp and stay relevant and help build the careers of tomorrow.

  • upGrad was awarded the Best Tech for Education by IAMAI for 2018-19
  • upGrad was also ranked as one of the LinkedIn Top Startups 2018: the 25 most sought-after startups in India
  • upGrad was earlier selected as one of the top ten most innovative companies in India by FastCompany
  • We were also covered by the Financial Times along with other disruptors in Ed-Tech
  • upGrad is the official education partner for the Government of India’s Startup India program
  • Our program with IIIT-B has been ranked the #1 program in the country in the domain of Artificial Intelligence and Machine Learning

     

    Role Summary

    Are you excited by the challenge and the opportunity of applying data-science and data-analytics techniques to the fast-developing education technology domain? Do you look forward to the sense of ownership and achievement that comes with innovating and creating data products from scratch and pushing them live into production systems? Do you want to work with a team of highly motivated members who are on a mission to empower individuals through education?
    If this is you, come join us and become a part of the upGrad technology team. At upGrad the technology team enables all facets of the business - whether it’s bringing efficiency to our marketing and sales initiatives, enhancing our student learning experience, empowering our content, delivery and student success teams, or aiding our students towards their desired career outcomes. We play the part of bringing together data & tech to solve these business problems and opportunities at hand.
    We are looking for a highly skilled, experienced and passionate data scientist to come on board and help create the next generation of data-powered education tech products. The ideal candidate has worked in a Data Science role before, is comfortable working with unknowns, evaluating the data and the feasibility of applying scientific techniques to business problems and products, and has a track record of developing and deploying data-science models into live applications. Someone with a strong math, stats and data-science background, comfortable handling data (structured + unstructured), with strong engineering know-how to implement and support such data products in a production environment.
    Ours is a highly iterative and fast-paced environment, so being flexible, communicating well and attention to detail are very important too. The ideal candidate should be passionate about customer impact and comfortable working with multiple stakeholders across the company.


    Roles & Responsibilities

      • 3+ years of experience in analytics, data science, machine learning or comparable role
      • Bachelor's degree in Computer Science, Data Science/Data Analytics, Math/Statistics or related discipline 
      • Experience in building and deploying Machine Learning models in Production systems
      • Strong analytical skills: ability to make sense out of a variety of data and its relation/applicability to the business problem or opportunity at hand
      • Strong programming skills: comfortable with Python - pandas, numpy, scipy, matplotlib; Databases - SQL and noSQL
      • Strong communication skills: ability to both formulate/understand the business problem at hand as well as ability to discuss with non data-science background stakeholders 
      • Comfortable dealing with ambiguity and competing objectives

       

      Skills Required

      • Experience in Text Analytics, Natural Language Processing

      • Advanced degree in Data Science/Data Analytics or Math/Statistics

      • Comfortable with data-visualization tools and techniques

      • Knowledge of AWS and Data Warehousing

      • Passion for building data-products for Production systems - a strong desire to impact

        the product through data-science technique

Job posted by
Priyanka Muralidharan

Principal Data Scientist

at Antuit

Founded 2013  •  Product  •  100-500 employees  •  Profitable
Data Science
Machine Learning (ML)
Artificial Intelligence (AI)
Data Scientist
Python
PyTorch
Supply Chain Management (SCM)
Time series
Demand forecasting
MLOPs
C++
Java
TensorFlow
Kubernetes
icon
Bengaluru (Bangalore)
icon
8 - 12 yrs
icon
₹25L - ₹30L / yr

About antuit.ai

 

Antuit.ai is the leader in AI-powered SaaS solutions for Demand Forecasting & Planning, Merchandising and Pricing. We have the industry’s first solution portfolio – powered by Artificial Intelligence and Machine Learning – that can help you digitally transform your Forecasting, Assortment, Pricing, and Personalization solutions. World-class retailers and consumer goods manufacturers leverage antuit.ai solutions, at scale, to drive outsized business results globally with higher sales, margin and sell-through.

 

Antuit.ai’s executives, comprised of industry leaders from McKinsey, Accenture, IBM, and SAS, and our team of Ph.Ds., data scientists, technologists, and domain experts, are passionate about delivering real value to our clients. Antuit.ai is funded by Goldman Sachs and Zodius Capital.

 

The Role:

 

Antuit.ai is interested in hiring a Principal Data Scientist. This person will facilitate standing up a standardization and automation ecosystem for ML product delivery, and will also actively participate in managing the implementation, design and tuning of the product to meet business needs.

 

Responsibilities:

 

Responsibilities include, but are not limited to, the following:

 

  • Manage and provide technical expertise to the delivery team, including recommending solution alternatives, identifying risks and managing business expectations
  • Design and build reliable, scalable automated processes for large-scale machine learning
  • Use engineering expertise to help design solutions to novel problems in software development, data engineering, and machine learning
  • Collaborate with Business, Technology and Product teams to stand up the MLOps process
  • Apply your experience in making intelligent, forward-thinking technical decisions to deliver the ML ecosystem, including implementing new standards, architecture design, and workflow tools
  • Deep dive into complex algorithmic and product issues in production
  • Own metrics and reporting for the delivery team
  • Set a clear vision for team members and work cohesively to attain it
  • Mentor and coach team members


Qualifications and Skills:

 

Requirements

  • Engineering degree in any stream
  • Has at least 7 years of prior experience in building ML driven products/solutions
  • Excellent programming skills in at least one of C++, Python or Java
  • Hands-on experience with open-source libraries and frameworks - TensorFlow, PyTorch, MLflow, Kubeflow, etc.
  • Developed and productized large-scale models/algorithms in prior experience
  • Can drive fast prototypes/proof of concept in evaluating various technology, frameworks/performance benchmarks.
  • Familiar with software development practices/pipelines (DevOps- Kubernetes, docker containers, CI/CD tools).
  • Good verbal, written and presentation skills.
  • Ability to learn new skills and technologies.
  • 3+ years working with retail or CPG preferred.
  • Experience in forecasting and optimization problems, particularly in the CPG / Retail industry preferred.

 

Information Security Responsibilities

 

  • Understand and adhere to Information Security policies, guidelines and procedures, and practice them to protect organizational data and information systems
  • Take part in Information Security training and act accordingly while handling information
  • Report all suspected security and policy breaches to the Infosec team or the appropriate authority (CISO)

EEOC

 

Antuit.ai is an at-will, equal opportunity employer.  We consider applicants for all positions without regard to race, color, religion, national origin or ancestry, gender identity, sex, age (40+), marital status, disability, veteran status, or any other legally protected status under local, state, or federal law.
Job posted by
Purnendu Shakunt

Fullstack Developer

at INSTAFUND INTERNET PRIVATE LIMITED

React.js
Javascript
Python
LAMP Stack
MongoDB
NodeJS (Node.js)
Ruby on Rails (ROR)
icon
Chennai
icon
1 - 3 yrs
icon
₹3L - ₹6L / yr
At Daddyswallet, we’re using today’s technology to bring significant disruptive innovation to the financial industry. We focus on improving the lives of consumers by delivering simple, honest and transparent financial products. We are looking for a full-stack developer with skills mainly in React Native, React.js, Python and Node.js.
Job posted by
Pruthiraj Rath