Data Engineer

at REConnect Energy

Posted by Bhavya Das
Bengaluru (Bangalore)
1 - 3 yrs
₹7L - ₹10L / yr
Full time
Skills
Python
pandas
NumPy
ETL
Data Structures
luigi
Renewable Energy
Weather
Linux/Unix
Data integration
Internet of Things (IOT)
PySpark
SQL
NOSQL Databases
C++
C
Informatica
Data Warehouse (DWH)
Airflow

Work at the intersection of Energy, Weather & Climate Sciences and Artificial Intelligence.

Responsibilities:

  • Manage all real-time and batch ETL pipelines with complete ownership
  • Develop systems for integration, storage and accessibility of multiple data streams from SCADA, IoT devices, Satellite Imaging, Weather Simulation Outputs, etc.
  • Support team members on product development and mentor junior team members

Expectations:

  • Ability to work on broad objectives and move from vision to business requirements to technical solutions
  • Willingness to assume ownership of effort and outcomes
  • High levels of integrity and transparency

Requirements:

  • Strong analytical and data-driven approach to problem solving
  • Proficiency in Python programming and working with numerical and/or imaging data
  • Experience working in Linux environments
  • Industry experience in building and maintaining ETL pipelines (a minimal sketch of such a task follows below)
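For flavour, here is a minimal sketch of the kind of batch ETL task this role describes, using Luigi and pandas from the skills list above. The file paths, column names and quality rule are hypothetical, not taken from the actual pipelines:

```python
# Hypothetical daily batch ETL task using Luigi and pandas.
import datetime

import luigi
import pandas as pd


class CleanSensorReadings(luigi.Task):
    """Read raw SCADA/IoT readings for one day, drop bad rows, save a clean CSV."""
    date = luigi.DateParameter()

    def output(self):
        return luigi.LocalTarget(f"clean/readings_{self.date:%Y%m%d}.csv")

    def run(self):
        raw = pd.read_csv(f"raw/readings_{self.date:%Y%m%d}.csv")
        clean = raw.dropna(subset=["turbine_id", "power_kw"])
        clean = clean[clean["power_kw"] >= 0]  # discard obvious sensor glitches
        with self.output().open("w") as f:
            clean.to_csv(f, index=False)


if __name__ == "__main__":
    luigi.build([CleanSensorReadings(date=datetime.date(2022, 1, 1))],
                local_scheduler=True)
```

Luigi only re-runs a task whose output target is missing, so re-invoking the pipeline recomputes only incomplete dates.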

About REConnect Energy

Founded: 2010
Type: Products & Services
Size: 100-1000 employees
Stage: Raised funding

Similar jobs

Data Engineer

at Codemonk

Founded 2018  •  Products & Services  •  20-100 employees  •  Profitable
ETL
Informatica
Data Warehouse (DWH)
SQL
Amazon Web Services (AWS)
Python
Java
Airflow
Remote only
3 - 5 yrs
₹11L - ₹15L / yr

Codemonk is a young and energetic startup that empowers other startups and enterprises by building simple and elegant software solutions. Through our expertise in AI, Blockchain and Enterprise Applications, we have helped brands such as Unilever, IndiaMART, Unacademy, greytHR, Fyle, and Skylark Drones to craft world-class products and improve their business. We are churning out fantastic software for our clients across the globe from our headquarters in Bengaluru.


We are looking for an experienced data engineer to join our team. You will use various methods to transform raw data into useful data systems. For example, you’ll create algorithms and conduct statistical analysis. Overall, you’ll strive for efficiency by aligning data systems with business goals.

To succeed in this data engineering position, you should have strong analytical skills and the ability to combine data from different sources. Data engineer skills also include familiarity with several programming languages and knowledge of machine learning methods.

Responsibilities:
  • Analyze and organize raw data
  • Build data systems and pipelines
  • Conduct complex data analysis and report on results
  • Combine raw information from different sources
  • Explore ways to enhance data quality and reliability
  • Collaborate with data scientists and architects on several projects

Requirements and skills:
  • Previous experience as a data engineer or in a similar role
  • Technical expertise with data models, data mining, and segmentation techniques
  • Knowledge of programming languages (e.g. Java and Python)
  • Hands-on experience with AWS S3 (data lakes)
  • Experience building ETL pipelines using Airflow or similar tools (see the sketch after this list)
  • Hands-on experience with SQL database design
  • Great numerical and analytical skills
  • Knowledge of big data technologies such as Hadoop or Spark is a plus
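To make the Airflow bullet concrete, a minimal sketch of a two-step DAG; the DAG id, schedule and task bodies are hypothetical placeholders:

```python
# Hypothetical two-step Airflow DAG: extract, then load.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    print("pull raw rows from the source API / database")


def load(**context):
    print("write transformed rows to the warehouse")


with DAG(
    dag_id="example_daily_etl",
    start_date=datetime(2022, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task  # run extract before load
```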
Job posted by
Susan Preetham

MLOps Engineer

at Synapsica Healthcare

Founded 2019  •  Product  •  20-100 employees  •  Raised funding
Python
CI/CD
DVCS
Machine Learning (ML)
Kubernetes
Amazon Web Services (AWS)
AWS CloudFormation
Docker
Airflow
Bengaluru (Bangalore)
3 - 5 yrs
₹12L - ₹20L / yr

Introduction

Synapsica is a series-A funded HealthTech startup founded by alumni from IIT Kharagpur, AIIMS New Delhi, and IIM Ahmedabad. We believe healthcare needs to be transparent and objective while being affordable. Every patient has the right to know exactly what is happening in their body and shouldn't have to rely on a cryptic two-line diagnosis.

Towards this aim, we are building an AI-enabled, cloud-based platform to analyse medical images and create v2.0 of advanced radiology reporting. We are backed by IvyCap, Endia Partners, Y Combinator and other investors from India, the US, and Japan. We are proud to have GE and The Spinal Kinetics as our partners. Here’s a small sample of what we’re building: https://www.youtube.com/watch?v=FR6a94Tqqls


Your Roles and Responsibilities

We are looking for an experienced MLOps Engineer to join our engineering team and help us create dynamic software applications for our clients. In this role, you will be a key team member involved in decision-making, implementation, development and advancement of ML operations for the core AI platform.

Roles and Responsibilities:

  • Work closely with a cross-functional team to serve business goals and objectives.
  • Develop, implement and manage MLOps in cloud infrastructure for data preparation, deployment, monitoring and model retraining.
  • Design and build application containerisation and orchestration with Docker and Kubernetes on AWS.
  • Build and maintain code, tools and packages in the cloud.

Requirements:

  • At least 2 years of experience in data engineering
  • At least 3 years of experience in Python, with familiarity with popular ML libraries
  • At least 2 years of experience in model serving and pipelines
  • Working knowledge of container technologies such as Kubernetes and Docker on AWS
  • Experience designing distributed systems deployments at scale
  • Hands-on experience in coding and scripting
  • Ability to write effective, scalable and modular code
  • Familiarity with Git workflows, CI/CD and NoSQL databases such as MongoDB
  • Familiarity with Airflow, DVC and MLflow is a plus (a small MLflow sketch follows below)
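Since MLflow is named as a plus, here is a small, hypothetical run-tracking sketch; the parameter, metric and file names are illustrative only:

```python
# Hypothetical MLflow run-tracking sketch; names and values are illustrative.
from pathlib import Path

import mlflow

# A file worth keeping alongside the run (created here so the sketch runs).
Path("model_card.md").write_text("baseline run notes\n")

with mlflow.start_run(run_name="baseline"):
    mlflow.log_param("learning_rate", 1e-3)  # hyperparameter for this run
    mlflow.log_metric("val_dice", 0.87)      # e.g. validation segmentation quality
    mlflow.log_artifact("model_card.md")     # attach the file to the run
```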
Job posted by
Human Resources

Senior Backend Engineer

at a secure data and intelligence sharing platform for enterprises. We believe data security and privacy are paramount for AI and Machine Learning to truly evolve and embed into the world.

Agency job
via HyrHub
Python
Data Structures
RESTful APIs
Design patterns
Django
Apache Kafka
pandas
TensorFlow
RabbitMQ
Amazon Web Services (AWS)
Machine Learning (ML)
DevOps
airflow
Bengaluru (Bangalore)
2 - 4 yrs
₹13L - ₹25L / yr
As part of an early-stage startup team:

Expectations

  • Good experience writing quality, mature Python code; familiar with Python design patterns, OOP, refactoring patterns, and writing async tasks and heavy background tasks.
  • Understanding of authentication and authorization; ideally has worked on authn/authz mechanisms in Python. Familiarity with Auth0 is preferred.
  • Understands how to secure API endpoints.
  • Familiar with AWS concepts: EC2, VPC, RDS, and IAM (or any cloud equivalent).
  • Basic DevOps experience engineering and supporting services in a modern containerized cloud stack.
  • Experience with, and an understanding of, Docker and docker-compose.

Responsibilities

  • Own backend design, architecture, implementation and delivery of features and modules.
  • Take ownership of the database: write migrations, and maintain and manage the database (Postgres, MongoDB).
  • Collaborate with a generalist team to develop, test and launch new features. Be a generalist and find ways to bring up your team, product and eventually the business.
  • Refactor when needed, and keep hunting for new tools that can help us as a business (not just the engineering team).
  • Develop data pipelines, from data sourcing, wrangling (cleaning) and transformations to eventual use.
  • Develop MLOps systems to take in data, analyze it, pass it through any models, and process results: DevOps for machine learning.
  • Follow modern Git-oriented dev workflows, versioning, CI/CD automation and testing.

Ideal candidate will have:

  • 2 years of full-time experience working as a data infrastructure / core backend engineer in a team environment.
  • Understanding of machine learning technologies, frameworks and the paradigms involved.
  • Experience with the following tools: FastAPI / Django, Airflow, Kafka / RabbitMQ, TensorFlow / pandas / Jupyter Notebook, pytest / asyncio (a small FastAPI sketch follows below).
  • Experience setting up and managing an ELK stack.
  • In-depth understanding of database systems, in terms of scaling compute efficiently.
  • Good understanding of data streaming services and the networking involved.
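As a concrete (and entirely hypothetical) illustration of the FastAPI and auth themes above, a small sketch of an async endpoint behind a bearer-token check; the token logic is a stand-in, not a real Auth0 integration:

```python
# Hypothetical FastAPI sketch: an async endpoint behind a token check.
from fastapi import Depends, FastAPI, HTTPException, status
from fastapi.security import OAuth2PasswordBearer

app = FastAPI()
oauth2_scheme = OAuth2PasswordBearer(tokenUrl="token")


async def current_user(token: str = Depends(oauth2_scheme)) -> str:
    # Placeholder validation; a real system would verify a JWT (e.g. via Auth0).
    if token != "secret-demo-token":
        raise HTTPException(status_code=status.HTTP_401_UNAUTHORIZED,
                            detail="Invalid token")
    return "demo-user"


@app.get("/datasets")
async def list_datasets(user: str = Depends(current_user)):
    # Heavy I/O would be awaited here (database, object store, etc.).
    return {"user": user, "datasets": ["readings", "events"]}
```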
Job posted by
Shwetha Naik

Data Scientist

at Gormalone LLP

Founded 2017  •  Products & Services  •  20-100 employees  •  Bootstrapped
Data Science
Machine Learning (ML)
Natural Language Processing (NLP)
Computer Vision
Data Analytics
EDA
Python
Statistical Analysis
ETL
Artificial Intelligence (AI)
TensorFlow
Deep Learning
Artificial Neural Network (ANN)
DNN
Long short-term memory (LSTM)
Keras
PyTorch
Model Building
Airflow
Mlops
CNN
RNN
YOLO
Bengaluru (Bangalore)
2 - 5 yrs
₹5L - ₹20L / yr

DATA SCIENTIST - MACHINE LEARNING

GormalOne LLP, Mumbai, India

 

Job Description

GormalOne is a social impact agri-tech enterprise focused on farmer-centric projects. Our vision is to make farming highly profitable for the smallest farmer, thereby ensuring India's “Nutrition security”. Our mission is driven by the use of advanced technology. Our technology is designed to be highly user-friendly for the majority of farmers, who are digitally naive. We are looking for people who are keen to use their skills to transform farmers' lives. You will join a highly energized and competent team that is working on advanced global technologies such as OCR, facial recognition, and AI-led disease prediction, amongst others.

 

GormalOne is looking for a machine learning engineer to join us. This collaborative yet dynamic role is suited for candidates who enjoy the challenge of building, testing, and deploying end-to-end ML pipelines and incorporating MLOps best practices across different technology stacks supporting a variety of use cases. We seek candidates who are curious not only about furthering their own knowledge of MLOps best practices through hands-on experience but who can simultaneously help uplift the knowledge of their colleagues.

 

Location: Bangalore

 

Roles & Responsibilities

  • Individual contributor
  • Developing and maintaining an end-to-end data science project
  • Deploying scalable applications on different platforms
  • Ability to analyze and enhance the efficiency of existing products

 

What are we looking for?

  • 3 to 5 years of experience as a Data Scientist
  • Skilled in data analysis, EDA, model building, and analysis
  • Basic coding skills in Python
  • Decent knowledge of statistics
  • Experience creating pipelines for ETL and ML models
  • Experience in the operationalization of ML models
  • Good exposure to Deep Learning, ANN, DNN, CNN, RNN, and LSTM
  • Hands-on experience in Keras, PyTorch or TensorFlow (a minimal Keras sketch follows below)
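To ground the last bullet, a minimal, hypothetical Keras sketch: a tiny dense network trained on synthetic tabular data, with every name and number illustrative:

```python
# Hypothetical Keras sketch: tiny dense network on synthetic tabular data.
import numpy as np
from tensorflow import keras

X = np.random.rand(256, 8).astype("float32")  # 256 synthetic samples, 8 features
y = (X.sum(axis=1) > 4.0).astype("float32")   # toy binary label

model = keras.Sequential([
    keras.layers.Input(shape=(8,)),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=3, batch_size=32, verbose=0)
print(model.evaluate(X, y, verbose=0))        # [loss, accuracy]
```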

 

 

Basic Qualifications

  • B.Tech/BE in Computer Science or Information Technology
  • Certification in AI, ML, or Data Science is preferred.
  • Master/Ph.D. in a relevant field is preferred.

 

 

Preferred Requirements

  • Experience in tools and packages like TensorFlow, MLflow, and Airflow
  • Experience in object detection techniques like YOLO
  • Exposure to cloud technologies
  • Operationalization of ML models
  • Good understanding and exposure to MLOps

 

 

Kindly note: Salary shall be commensurate with qualifications and experience.
Job posted by
Dhwani Rambhia

Senior Data Warehouse Engineer

at Tide

Founded 2015  •  Product  •  500-1000 employees  •  Raised funding
Snowflake schema
Data Warehouse (DWH)
Informatica
ETL
Data modeling
Amazon Web Services (AWS)
Looker
Airflow
Remote only
3 - 7 yrs
₹30L - ₹32L / yr

About You

As a Senior Data Engineer on the data team, you will be responsible for running the data systems and services that monitor and report on the end-to-end data infrastructure. We are heavily dependent on Snowflake, Airflow, Fivetran, and Looker for our business intelligence, and embrace AWS as a key partner across our engineering teams. You will report directly to the Head of Data Engineering and work closely with our ML Engineering and Data Science teams.

Some of the things you’ll be doing: 

  • Integration of additional data sources into our Snowflake Data Warehouse using Fivetran or custom code (a sketch of the custom-code path follows after this list)
  • Building infrastructure that helps our analysts to move faster, such as adding tests to our CI/CD systems
  • Designing, developing, and implementing scalable, automated processes for data extraction, processing, and analysis
  • Maintaining an accurate log of the technical documentation for the warehouse
  • Troubleshooting and resolving technical issues as they arise
  • Ensuring all servers and applications are patched and upgraded in a timely manner
  • Looking for ways of improving both what and how services are delivered by the department
  • Building data loading services for the purpose of importing data from numerous, disparate data sources, inclusive of APIs, logs, relational, and non-relational databases
  • Working with the BI Developer to ensure that all data feeds are optimised and available at the required times. This can include Change Data Capture and other “delta loading” approaches
  • Discovering, transforming, testing, deploying and documenting data sources
  • Applying, helping define, and championing data warehouse governance: data quality, testing, coding best practices, and peer review
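For the custom-code loading path mentioned in the first bullet, a minimal sketch using the Snowflake Python connector; the account, credentials, file and table names are all hypothetical:

```python
# Hypothetical bulk load into Snowflake via the Python connector.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",   # hypothetical account identifier
    user="loader",
    password="***",
    warehouse="LOAD_WH",
    database="RAW",
    schema="EVENTS",
)
try:
    cur = conn.cursor()
    # Stage-and-copy is the usual bulk pattern: PUT a local file to the
    # table stage, then COPY it into the target table.
    cur.execute("PUT file:///tmp/events.csv @%SIGNUPS")
    cur.execute("COPY INTO SIGNUPS FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)")
finally:
    conn.close()
```

The stage-and-copy pattern (PUT to a stage, then COPY INTO) is the usual bulk-loading route into Snowflake; Fivetran covers the managed-connector route named in the same bullet.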

What you’ll get in return:

  • Competitive Salary
  • Family & Self Health Insurance
  • Life & Accidental Insurance
  • 25 days of annual leave
  • We invest in your development with a professional L&D budget (fixed amount of 40,000 per year)
  • Flexible working options
  • Share Options

You’ll be a great fit if:

  • You have 4+ years of experience in Data Engineering
  • You have extensive experience building ELT pipelines (Snowflake an added advantage)
  • You have experience in building data solutions, both batch processes and streaming applications
  • You have extensive experience in designing, architecting and implementing Data Engineering best practices
  • You have good experience in Data Modelling 
  • You have extensive experience in writing SQL statements and performance tuning them
  • You have experience in data mining, data warehouse solutions, and ETL, and using databases in a business environment with large-scale, complex datasets
  • You have experience architecting analytical databases
  • You have experience working in a data engineering or data warehousing team
  • You have high development standards, especially for code quality, code reviews, unit testing, continuous integration and deployment
  • You have strong technical documentation skills and the ability to be clear and precise with business users
  • You have business-level of English and good communication skills
  • You have knowledge of various systems across the AWS platform and the role they play e.g. Lambda, DynamoDB, CloudFormation, Glue 
  • You have experience with Git and Docker
  • You have experience with Snowflake, dbt, Apache Airflow, Python, Fivetran, AWS, Git and Looker

 Who are Tide?

We’re the UK’s leading provider of smart current accounts for sole traders and small companies. We’re also on a mission to save business owners time and money on their banking and finance admin so they can get back to doing what they love - for too long, these customers have been under-served by the big banks.

Our offices are in London, UK, Sofia, Bulgaria and Hyderabad, India, where our teams are dedicated to our small business members, revolutionising business banking for SMEs. We are also the leading provider of UK SME business accounts and one of the fastest-growing fintechs in the UK.

We’re scaling at speed with a focus on hiring talented individuals with a growth mindset and ownership mentality who can juggle multiple and sometimes changing priorities. Our values show our commitment to working as one team, collaborating to take action and deliver results. Member first: we are passionate about our members and put them first. Data-driven: we make decisions by creating insight from data.

We’re also one of LinkedIn’s top 10 hottest UK companies to work for.

Here’s what we think about diversity and inclusion…

We build our services for all types of small business owners. We aim to be as diverse as our members so we hire people from a variety of backgrounds. We’re proud that our diversity not only reflects our multicultural society but that this breadth of experience makes us awesome at solving problems. Everyone here has a voice and you’ll be able to make a difference. If you share our values and want to help small businesses, you’ll make an amazing Tidean.

A note on the future of work at Tide:

Tide’s offices are beginning to open for Tideans to return on a voluntary basis. Timelines for reopening will be unique for each region and will be based on region-specific guidelines. The health and well-being of Tideans and candidates is our primary concern, therefore, for the foreseeable future, we have transitioned all interviews and onboarding to be conducted via Zoom.

Once offices are fully open, Tideans will be able to choose to work from the office or remotely, with the requirement that they visit the office or participate in face-to-face team activities several times per month.

Job posted by
Anushka Jain

Senior Data Engineer

at Waterdip Labs

Founded 2021  •  Products & Services  •  0-20 employees  •  Profitable
Spark
Hadoop
Big Data
Data engineering
PySpark
Python
Apache Spark
SQL
Amazon Redshift
Apache Kafka
Amazon Web Services (AWS)
Git
CI/CD
Apache airflow
Bengaluru (Bangalore)
5 - 7 yrs
₹15L - ₹30L / yr
About The Company
 
Waterdip Labs is a deep tech company founded in 2021. We are building an open-source observability platform for AI. Our platform will help data engineers and data scientists observe data and ML model performance in production.
Apart from the product, we are helping a few of our clients build data and ML products.
Join us to help build India's first open-source MLOps product.

About The Founders
Both founders are second-time founders. Their first venture, Inviz AI Solutions (https://www.inviz.ai), is bootstrapped and has grown into a prominent software service provider with several Fortune 500 clients and a team of 100+ engineers.
Subhankar is an IIT Kharagpur alum with 10+ years of experience in the software industry. He has built world-class data and ML systems for companies like Target, Tesco, and Falabella, and SAAS products for multiple India- and USA-based start-ups. https://www.linkedin.com/in/wsubhankarb/
Gaurav is an IIT Dhanbad alum. He started his career in Tesco Labs as a data scientist for retail applications and gradually moved on to a more techno-functional role in major retail companies like Tesco, Falabella, Fazil, Lowes, and Aditya Birla group. https://www.linkedin.com/in/kumargaurav2596/
They started Waterdip with a vision to build world-class open-source software out of India.

About the Job
The client is a publicly owned, global technology company with 48 offices in 17 countries. It provides software design and delivery, tools, and consulting services. The client is closely associated with the movement for agile software development and has contributed to the content of open-source products.

Job responsibilities
• Analyze and organize raw data; build data pipelines and test scripts
• Understand the business process and job orchestration from the SSIS packages
• Explore ways to enhance data quality, optimize and improve reliability
• Developing data pipelines to process event-streaming data
• Implementation of data standards, policies and data security
• Develop CI/CD pipeline to deploy and maintain data pipelines
 
Job qualifications
5-7 years of experience
 
Technical skills
• You should have experience with Python and one or more other development languages
• Working experience with SQL
• Exposure to data processing tooling and data visualization
• Comfortable with Azure, CI/CD and Git
• An understanding of data quality, data pipelines, data storage, distributed systems architecture, data security, data privacy & ethics, data modeling, data infrastructure & operations, and Business Intelligence
• Bonus points if you have prior working knowledge of creating data products and/or prior experience with Azure Data Catalog or Azure Event Hub
• A workflow management platform like Airflow
• A large-scale data processing tool like Apache Spark (see the sketch after this list)
• A distributed messaging platform like Apache Kafka
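As a sketch of the Spark point above (assuming PySpark and a hypothetical input path), a short job that loads events, checks a simple quality rule and aggregates — the kind of check an observability platform cares about:

```python
# Hypothetical PySpark job: load events, check a quality rule, aggregate.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("event-quality-check").getOrCreate()

events = spark.read.json("events.json")  # hypothetical input path

# Simple data-quality signal: how many events are missing a user_id?
null_users = events.filter(F.col("user_id").isNull()).count()
print(f"events with null user_id: {null_users}")

# Daily event counts per type.
daily = (events
         .groupBy(F.to_date("ts").alias("day"), "event_type")
         .count()
         .orderBy("day"))
daily.show()
```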

Professional skills
• You enjoy influencing others and always advocate for technical excellence while being open to change when needed
• Presence in the external tech community: you willingly share your expertise with others via speaking engagements, contributions to open source, blogs and more
• You're resilient in ambiguous situations and can approach challenges from multiple perspectives
Job posted by
Subhankar Biswas

Backend Data Engineer

at India's best Short Video App

Agency job
via wrackle
Data engineering
Big Data
Spark
Apache Kafka
Apache Hive
Data engineer
Elastic Search
MongoDB
Python
Apache Storm
Druid Database
Apache HBase
Cassandra
DynamoDB
Memcached
Proxies
HDFS
Pig
Scribe
Apache ZooKeeper
Agile/Scrum
Roadmaps
DevOps
Software Testing (QA)
Data Warehouse (DWH)
flink
aws kinesis
presto
airflow
caches
data pipeline
Bengaluru (Bangalore)
4 - 12 yrs
₹25L - ₹50L / yr
What Makes You a Great Fit for The Role?

You’re awesome at and will be responsible for
 
  • Extensive programming experience with cross-platform development in one of the following: Java/Spring Boot, JavaScript/Node.js/Express.js, or Python
  • 3-4 years of experience in big data analytics technologies like Storm, Spark/Spark Streaming, Flink, AWS Kinesis, Kafka streaming, Hive, Druid, Presto, Elasticsearch, Airflow, etc. (a small Kafka consumer sketch follows below)
  • 3-4 years of experience in building high-performance RPC services using different high-performance paradigms: multi-threading, multi-processing, asynchronous programming (non-blocking IO), reactive programming
  • 3-4 years of experience working with high-throughput, low-latency databases and cache layers like MongoDB, HBase, Cassandra, DynamoDB, and ElastiCache (Redis + Memcached)
  • Experience with designing and building high-scale app backends and micro-services leveraging cloud-native services on AWS like proxies, caches, CDNs, messaging systems, serverless compute (e.g. Lambda), monitoring and telemetry
  • Strong understanding of distributed systems fundamentals around scalability, elasticity, availability and fault-tolerance
  • Experience in analysing and improving the efficiency, scalability, and stability of distributed systems and backend micro-services
  • 5-7 years of strong design/development experience in building massively large-scale, high-throughput, low-latency distributed internet systems and products
  • Good experience in working with Hadoop and big data technologies like HDFS, Pig, Hive, Storm, HBase, Scribe, ZooKeeper and NoSQL systems
  • Agile methodologies, sprint management, roadmaps, mentoring, documenting, software architecture
  • Liaison with Product Management, DevOps, QA, Client and other teams
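To illustrate the Kafka item above, a small, hypothetical consumer using the kafka-python client; the topic, broker address and event schema are assumptions:

```python
# Hypothetical Kafka consumer: count JSON events per type as they arrive.
import json
from collections import Counter

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "user-events",                             # hypothetical topic
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda m: json.loads(m.decode("utf-8")),
    auto_offset_reset="earliest",
)

counts = Counter()
for message in consumer:
    event = message.value
    counts[event.get("type", "unknown")] += 1
    if sum(counts.values()) % 1000 == 0:       # periodic progress log
        print(dict(counts))
```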
 
Your Experience Across The Years in the Roles You’ve Played
 
  • Have a total of 5-7+ years of experience, with 2-3 years in a startup
  • Have a B.Tech or M.Tech or equivalent academic qualification from a premier institute
  • Experience in product companies working on internet-scale applications is preferred
  • Thoroughly aware of cloud computing infrastructure on AWS, leveraging cloud-native services and infrastructure services to design solutions
  • Follow the Cloud Native Computing Foundation ecosystem, leveraging mature open-source projects, including an understanding of containerisation/Kubernetes
 
You are passionate about learning or growing your expertise in some or all of the following
Data Pipelines
Data Warehousing
Statistics
Metrics Development
 
We Value Engineers Who Are
 
Customer-focused: We believe that doing what’s right for the creator is ultimately what will drive our business forward.
Obsessed with Quality: Your Production code just works & scales linearly
Team players. You believe that more can be achieved together. You listen to feedback and also provide supportive feedback to help others grow/improve.
Pragmatic: We do things quickly to learn what our creators desire. You know when it’s appropriate to take shortcuts that don’t sacrifice quality or maintainability.
Owners: Engineers at Chingari know how to positively impact the business.
Job posted by
Naveen Taalanki

Data Engineer

at Reddoorz

Founded 2015  •  Product  •  1000-5000 employees  •  Profitable
Python
Amazon Web Services (AWS)
Amazon Redshift
pandas
Airflow
Noida
2 - 6 yrs
₹8L - ₹15L / yr
Should have hands-on experience with Python, AWS, and Redshift/Snowflake/BigQuery or any MPP database, plus knowledge of data pipelines and orchestration (a minimal sketch follows below).
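A minimal, hypothetical sketch of those hands-on skills: querying Redshift into pandas. The connection string and table are illustrative, and the redshift+psycopg2 dialect assumes the sqlalchemy-redshift package is installed:

```python
# Hypothetical Redshift query pulled into a pandas DataFrame.
import pandas as pd
from sqlalchemy import create_engine

# The redshift+psycopg2 dialect comes from the sqlalchemy-redshift package.
engine = create_engine(
    "redshift+psycopg2://loader:***@example.redshift.amazonaws.com:5439/analytics"
)

bookings = pd.read_sql(
    "SELECT property_id, COUNT(*) AS n FROM bookings GROUP BY property_id",
    engine,
)
print(bookings.head())
```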
Job posted by
Atif Imam

Data Engineer

at MNC Company - Product Based

Agency job
via Bharat Headhunters
Data Warehouse (DWH)
Informatica
ETL
Python
Google Cloud Platform (GCP)
SQL
Airflow
Bengaluru (Bangalore), Chennai, Hyderabad, Pune, Delhi, Gurugram, Noida, Ghaziabad, Faridabad
5 - 9 yrs
₹10L - ₹15L / yr

Job Responsibilities

  • Design, build & test ETL processes using Python & SQL for the corporate data warehouse
  • Inform, influence, support, and execute our product decisions
  • Maintain advertising data integrity by working closely with R&D to organize and store data in a format that provides accurate data and allows the business to quickly identify issues.
  • Evaluate and prototype new technologies in the area of data processing
  • Think quickly, communicate clearly and work collaboratively with product, data, engineering, QA and operations teams
  • High energy level, strong team player and good work ethic
  • Data analysis, understanding of business requirements and translation into logical pipelines & processes
  • Identification, analysis & resolution of production & development bugs
  • Support the release process including completing & reviewing documentation
  • Configure data mappings & transformations to orchestrate data integration & validation
  • Provide subject matter expertise
  • Document solutions, tools & processes
  • Create & support test plans with hands-on testing
  • Peer reviews of work developed by other data engineers within the team
  • Establish good working relationships & communication channels with relevant departments

 

Skills and Qualifications we look for

  • University degree 2.1 or higher (or equivalent) in a relevant subject. Master’s degree in any data subject will be a strong advantage.
  • 4 - 6 years of experience in data engineering.
  • Strong coding ability and software development experience in Python.
  • Strong hands-on experience with SQL and Data Processing.
  • Google Cloud Platform (Cloud Composer, Dataflow, Cloud Functions, BigQuery, Cloud Storage, Dataproc); a short BigQuery sketch follows after this list
  • Good working experience in any one of the ETL tools (Airflow would be preferable).
  • Should possess strong analytical and problem-solving skills.
  • Good to have skills - Apache PySpark, CircleCI, Terraform
  • Motivated, self-directed, able to work with ambiguity and interested in emerging technologies, agile and collaborative processes.
  • Understanding & experience of agile / scrum delivery methodology
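To make the GCP bullet concrete, a short, hypothetical BigQuery sketch: run a query and iterate the result. The project, dataset and column names are illustrative:

```python
# Hypothetical BigQuery query using the google-cloud-bigquery client.
from google.cloud import bigquery

client = bigquery.Client()  # uses default GCP credentials

query = """
    SELECT campaign_id, SUM(clicks) AS clicks
    FROM `my_project.ads.daily_stats`
    GROUP BY campaign_id
    ORDER BY clicks DESC
    LIMIT 10
"""
for row in client.query(query).result():
    print(row.campaign_id, row.clicks)
```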

 

Job posted by
Ranjini C. N

Data Engineer

at a Data & Cloud Technology services-based company.

Agency job
via Multi Recruit
Apache Spark
HiveQL
Amazon Web Services (AWS)
Data engineering
JSON
XML
Apache Airflow
Chennai, Coimbatore, Madurai
5 - 10 yrs
₹12L - ₹19L / yr
  • Must have experience leading teams and driving customer interactions
  • Must have multiple successful deployments and user stories to their credit
  • Extensive hands-on experience in Apache Spark along with HiveQL
  • Sound knowledge of Amazon Web Services or any other cloud environment
  • Experienced in data flow orchestration using Apache Airflow
  • Experience with JSON, XML, CSV and Parquet file formats with Snappy compression (see the sketch after this list)
  • Experience with file movements between HDFS and AWS S3
  • Experience in shell scripting and scripting to automate report generation and migration of reports to AWS S3
  • Experience building data pipelines using pandas and the Flask framework
  • Good familiarity with Anaconda and Jupyter Notebook
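As a brief illustration of the file-format bullet, a hypothetical PySpark job converting JSON to Snappy-compressed Parquet; the paths are placeholders (the same calls work against s3a:// or hdfs:// URIs on a configured cluster):

```python
# Hypothetical JSON-to-Parquet conversion with Snappy compression.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("json-to-parquet").getOrCreate()

df = spark.read.json("input/events.json")

(df.write
   .mode("overwrite")
   .option("compression", "snappy")  # snappy is also Spark's default codec
   .parquet("output/events_parquet"))
```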
Job posted by
Ragul Ragul