5 - 9 yrs
₹10L - ₹15L / yr
Bengaluru (Bangalore), Chennai, Hyderabad, Pune, Delhi, Gurugram, Noida, Ghaziabad, Faridabad
Skills
ETL
Informatica
Data Warehouse (DWH)
Python
Google Cloud Platform (GCP)
SQL
Airflow

Job Responsibilities

  • Design, build & test ETL processes using Python & SQL for the corporate data warehouse
  • Inform, influence, support, and execute our product decisions
  • Maintain advertising data integrity by working closely with R&D to organize and store data in a format that is accurate and lets the business identify issues quickly.
  • Evaluate and prototype new technologies in the area of data processing
  • Think quickly, communicate clearly and work collaboratively with product, data, engineering, QA and operations teams
  • Bring a high energy level, strong teamwork, and a good work ethic
  • Data analysis, understanding of business requirements and translation into logical pipelines & processes
  • Identification, analysis & resolution of production & development bugs
  • Support the release process including completing & reviewing documentation
  • Configure data mappings & transformations to orchestrate data integration & validation
  • Provide subject matter expertise
  • Document solutions, tools & processes
  • Create & support test plans with hands-on testing
  • Peer reviews of work developed by other data engineers within the team
  • Establish good working relationships & communication channels with relevant departments
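To make the first responsibility concrete, here is a minimal, hypothetical sketch of an ETL step in Python and SQL. It uses the standard-library sqlite3 module as a stand-in for the corporate data warehouse; the table and field names are illustrative, not taken from this posting.

```python
import sqlite3

def run_etl(conn, raw_rows):
    """Extract raw ad events, transform them, and load a warehouse table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS fact_ad_spend (campaign TEXT, spend_usd REAL)"
    )
    # Transform: keep only valid rows and normalise campaign names.
    cleaned = [
        (r["campaign"].strip().lower(), float(r["spend"]))
        for r in raw_rows
        if r.get("campaign") and r.get("spend") is not None
    ]
    conn.executemany("INSERT INTO fact_ad_spend VALUES (?, ?)", cleaned)
    return len(cleaned)

conn = sqlite3.connect(":memory:")
raw = [
    {"campaign": " Spring ", "spend": "10.5"},
    {"campaign": None, "spend": "3.0"},   # dropped: missing campaign
    {"campaign": "summer", "spend": 7.25},
]
loaded = run_etl(conn, raw)
total = conn.execute("SELECT SUM(spend_usd) FROM fact_ad_spend").fetchone()[0]
print(loaded, total)  # 2 17.75
```

A real implementation would add tests around the transform step, which is where the "test ETL processes" responsibility bites.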


Skills and Qualifications we look for

  • University degree at 2.1 or higher (or equivalent) in a relevant subject; a Master's degree in any data subject is a strong advantage.
  • 4-6 years' experience in data engineering.
  • Strong coding ability and software development experience in Python.
  • Strong hands-on experience with SQL and data processing.
  • Google Cloud Platform (Cloud Composer, Dataflow, Cloud Functions, BigQuery, Cloud Storage, Dataproc).
  • Good working experience with at least one ETL/orchestration tool (Airflow preferred).
  • Strong analytical and problem-solving skills.
  • Good-to-have skills: Apache PySpark, CircleCI, Terraform.
  • Motivated, self-directed, able to work with ambiguity, and interested in emerging technologies and agile, collaborative processes.
  • Understanding of and experience with agile/Scrum delivery methodology.
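Orchestration tools such as Airflow boil down to running tasks in dependency order. A minimal sketch of that idea using only the standard library's graphlib; the task names are hypothetical, not from the posting.

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline: each task maps to the set of tasks it depends on,
# mirroring how an Airflow DAG wires upstream/downstream tasks.
dag = {
    "extract_ads": set(),
    "extract_crm": set(),
    "validate": {"extract_ads", "extract_crm"},
    "load_warehouse": {"validate"},
}
order = list(TopologicalSorter(dag).static_order())
print(order)  # both extracts run before validate; load_warehouse runs last
```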



About MNC Company - Product Based


Similar jobs

Innovative Fintech Startup
Agency job
via Qrata by Blessy Fernandes
Bengaluru (Bangalore)
5 - 12 yrs
₹28L - ₹55L / yr
Data Science
Data Scientist
Machine Learning (ML)
Python
Statistical Modeling
- Lead a team of data scientists from top-tier schools and collaborate with founders and business heads to solve complex business problems
- Develop statistical and machine-learning-based models/pipelines/methods to improve business processes and engagements
- Conduct sophisticated data mining analyses of large volumes of data and build data science models, as required, as part of credit and risk underwriting solutions, customer engagement and retention, new business initiatives, and business process improvements
- Translate data mining results into clear, business-focused deliverables for decision-makers
- Work with application developers on integrating machine learning algorithms and data mining models into operational systems so they lead to automation, productivity increases, and time savings
- Provide the technical direction required to resolve complex issues and ensure the on-time delivery of solutions that meet the business team's expectations; may need to develop new methods for new situations
- Knowledge of how to leverage statistical models in algorithms is a must
- Experience in multivariate analysis: identifying how several parameters affect customer retention/behaviour, and identifying actions at different points of the customer lifecycle

- Extensive experience coding in Python, and experience mentoring teams in the same
- Great understanding of the data science landscape and which tools to leverage for different problems
- A structured thinker who can quickly bring structure to any data science problem
- Ability to visualize data stories, adeptness with data visualization tools, and the ability to present insights as cohesive stories to senior leadership
- Excellent ability to organize large data sets collected from many sources (web APIs and internal databases) to derive actionable insights
- Initiate data science programs in the team and collaborate across other data science teams to build a knowledge database
Healthtech Startup
Agency job
via Qrata by Rayal Rajan
Bengaluru (Bangalore)
6 - 10 yrs
₹20L - ₹30L / yr
Google Cloud Platform (GCP)
BigQuery

Description: 

As a Data Engineering Lead at Company, you will be at the forefront of shaping and managing our data infrastructure with a primary focus on Google Cloud Platform (GCP). You will lead a team of data engineers to design, develop, and maintain our data pipelines, ensuring data quality, scalability, and availability for critical business insights. 


Key Responsibilities: 

1. Team Leadership: 

a. Lead and mentor a team of data engineers, providing guidance, coaching, and performance management. 

b. Foster a culture of innovation, collaboration, and continuous learning within the team. 

2. Data Pipeline Development (Google Cloud Focus): 

a. Design, develop, and maintain scalable data pipelines on Google Cloud Platform (GCP) using services such as BigQuery, Dataflow, and Dataprep.

b. Implement best practices for data extraction, transformation, and loading (ETL) processes on GCP. 

3. Data Architecture and Optimization: 

a. Define and enforce data architecture standards, ensuring data is structured and organized efficiently. 

b. Optimize data storage, processing, and retrieval for maximum performance and cost-effectiveness on GCP.

4. Data Governance and Quality: 

a. Establish data governance frameworks and policies to maintain data quality, consistency, and compliance with regulatory requirements.

b. Implement data monitoring and alerting systems to proactively address data quality issues.

5. Cross-functional Collaboration: 

a. Collaborate with data scientists, analysts, and other cross-functional teams to understand data requirements and deliver data solutions that drive business insights. 

b. Participate in discussions regarding data strategy and provide technical expertise. 

6. Documentation and Best Practices: 

a. Create and maintain documentation for data engineering processes, standards, and best practices. 

b. Stay up-to-date with industry trends and emerging technologies, making recommendations for improvements as needed. 


Qualifications 

● Bachelor's or Master's degree in Computer Science, Data Engineering, or related field. 

● 5+ years of experience in data engineering, with a strong emphasis on Google Cloud Platform. 

● Proficiency in Google Cloud services, including BigQuery, Dataflow, Dataprep, and Cloud Storage. 

● Experience with data modeling, ETL processes, and data integration.

● Strong programming skills in languages like Python or Java.

● Excellent problem-solving and communication skills. 

● Leadership experience and the ability to manage and mentor a team.
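One widely used ETL best practice on BigQuery (responsibility 2b above) is making loads idempotent, typically via a MERGE statement. The sketch below emulates that pattern with the standard-library sqlite3 module, since running against BigQuery itself requires credentials; the table and column names are illustrative.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE dim_user (user_id INTEGER PRIMARY KEY, plan TEXT)")

def upsert_users(rows):
    # Idempotent load: re-running the same batch leaves one row per key,
    # mirroring what a BigQuery MERGE statement achieves.
    conn.executemany(
        "INSERT INTO dim_user VALUES (?, ?) "
        "ON CONFLICT(user_id) DO UPDATE SET plan = excluded.plan",
        rows,
    )

upsert_users([(1, "free"), (2, "pro")])
upsert_users([(1, "pro")])  # safe to retry or update
rows = sorted(conn.execute("SELECT * FROM dim_user"))
print(rows)  # [(1, 'pro'), (2, 'pro')]
```

Idempotence matters because orchestrators retry failed tasks; a plain INSERT would duplicate rows on every retry.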


Hyderabad
7 - 15 yrs
₹7L - ₹45L / yr
Analytics
Kubernetes
Python
  • 5+ years of industry experience administering (including setting up, managing, and monitoring) data processing pipelines, both streaming and batch, using frameworks such as Kafka Streams and PySpark, and streaming databases such as Druid or equivalents such as Hive
  • Strong industry expertise with containerization technologies, including Kubernetes (EKS/AKS) and Kubeflow
  • Experience with cloud platform services such as AWS, Azure, or GCP, especially EKS and managed Kafka
  • 5+ years of industry experience in Python
  • Experience with popular modern web frameworks such as Spring Boot, Play Framework, or Django
  • Experience with scripting languages; Python experience is highly desirable. Experience in API development using Swagger
  • Experience implementing automated testing platforms and unit tests
  • Proficient understanding of code versioning tools such as Git
  • Familiarity with continuous integration (e.g., Jenkins)

Responsibilities

  • Architect, Design and Implement Large scale data processing pipelines using Kafka Streams, PySpark, Fluentd and Druid
  • Create custom Operators for Kubernetes, Kubeflow
  • Develop data ingestion processes and ETLs
  • Assist in dev ops operations
  • Design and Implement APIs
  • Identify performance bottlenecks and bugs, and devise solutions to these problems
  • Help maintain code quality, organization, and documentation
  • Communicate with stakeholders regarding various aspects of the solution
  • Mentor team members on best practices
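The core of a streaming pipeline such as the Kafka Streams work above is windowed aggregation. Below is a framework-free sketch of a tumbling-window count in plain Python, with hypothetical event data; Kafka Streams or PySpark Structured Streaming express the same logic declaratively.

```python
from collections import defaultdict

def tumbling_window_counts(events, window_ms=1000):
    """Count events per key per tumbling window (Kafka-Streams style)."""
    counts = defaultdict(int)
    for key, ts in events:
        # Each event lands in exactly one fixed-size, non-overlapping window.
        window_start = (ts // window_ms) * window_ms
        counts[(key, window_start)] += 1
    return dict(counts)

events = [("click", 100), ("click", 900), ("view", 1100), ("click", 1500)]
result = tumbling_window_counts(events)
print(result)  # {('click', 0): 2, ('view', 1000): 1, ('click', 1000): 1}
```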
Marktine
at Marktine
1 recruiter
Vishal Sharma
Posted by Vishal Sharma
Remote, Bengaluru (Bangalore)
3 - 7 yrs
₹10L - ₹24L / yr
Data Science
R Programming
Python
SQL
Machine Learning (ML)

Responsibilities:

  • Design and develop strong analytics systems and predictive models
  • Managing a team of data scientists, machine learning engineers, and big data specialists
  • Identify valuable data sources and automate data collection processes
  • Undertake pre-processing of structured and unstructured data
  • Analyze large amounts of information to discover trends and patterns
  • Build predictive models and machine-learning algorithms
  • Combine models through ensemble modeling
  • Present information using data visualization techniques
  • Propose solutions and strategies to business challenges
  • Collaborate with engineering and product development teams
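The "combine models through ensemble modeling" bullet can be illustrated with the simplest ensemble, prediction averaging. The toy lambdas below are stand-ins for trained regressors, used here purely for illustration.

```python
def ensemble_predict(models, x):
    """Average the predictions of several models (a simple ensemble)."""
    preds = [m(x) for m in models]
    return sum(preds) / len(preds)

# Three toy "models" whose individual errors cancel out when averaged.
models = [lambda x: 2 * x, lambda x: 2 * x + 1, lambda x: 2 * x - 1]
print(ensemble_predict(models, 5))  # 10.0
```

Averaging reduces variance when the component models make uncorrelated errors, which is why bagging and stacking build on this idea.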

Requirements:

  • Proven experience as a seasoned Data Scientist
  • Good experience in data mining processes
  • Understanding of machine learning; knowledge of operations research is a plus
  • Strong understanding of and experience with R, SQL, and Python; knowledge of Scala, Java, or C++ is an asset
  • Experience using business intelligence tools (e.g., Tableau) and data frameworks (e.g., Hadoop)
  • Strong math skills (e.g., statistics, algebra)
  • Problem-solving aptitude
  • Excellent communication and presentation skills
  • Experience in Natural Language Processing (NLP)
  • Strong competitive coding skills
  • BSc/BA in Computer Science, Engineering or relevant field; graduate degree in Data Science or other quantitative field is preferred
El Corte Inglés
Saradhi Reddy
Posted by Saradhi Reddy
Hyderabad
3 - 7 yrs
₹10L - ₹25L / yr
Data Science
R Programming
Python
DemandMatrix
at DemandMatrix
4 recruiters
Harwinder Singh
Posted by Harwinder Singh
Remote only
9 - 12 yrs
₹25L - ₹30L / yr
Big Data
PySpark
Apache Hadoop
Spark
Python

Only a solid grounding in computer engineering, Unix, data structures and algorithms would enable you to meet this challenge.

7+ years of experience architecting, developing, releasing, and maintaining large-scale big data platforms on AWS or GCP

Understanding of how Big Data tech and NoSQL stores like MongoDB, HBase/HDFS, ElasticSearch synergize to power applications in analytics, AI and knowledge graphs

Understanding of how data processing models, data locality patterns, disk I/O, network I/O, and shuffling affect large-scale text processing (feature extraction, searching, etc.)

Expertise with a variety of data processing systems, including streaming, event, and batch (Spark,  Hadoop/MapReduce)

5+ years proficiency in configuring and deploying applications on Linux-based systems

5+ years of experience with Spark, especially PySpark, for transforming large non-structured text data and creating highly optimized pipelines

Experience with RDBMS, ETL techniques and frameworks (Sqoop, Flume) and big data querying tools (Pig, Hive)

A stickler for world-class best practices: uncompromising on engineering quality, conversant with standards and reference architectures, and steeped in the Unix philosophy, with an appreciation of big data design patterns, orthogonal code design, and functional computation models
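The Hadoop/MapReduce expertise asked for above centers on the map/shuffle/reduce pattern. Here is a dependency-free word-count sketch in plain Python (the canonical MapReduce example); real deployments distribute these two phases across a cluster.

```python
from collections import Counter
from itertools import chain

def map_phase(doc):
    # Map: emit (token, 1) pairs, the classic word-count mapper.
    return [(tok.lower(), 1) for tok in doc.split()]

def reduce_phase(pairs):
    # Reduce: sum the counts per token (the shuffle groups by key).
    counts = Counter()
    for tok, n in pairs:
        counts[tok] += n
    return counts

docs = ["big data big pipelines", "data everywhere"]
counts = reduce_phase(chain.from_iterable(map_phase(d) for d in docs))
print(counts["big"], counts["data"])  # 2 2
```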
LendingKart
at LendingKart
5 recruiters
Mohammed Nayeem
Posted by Mohammed Nayeem
Bengaluru (Bangalore), Ahmedabad
2 - 5 yrs
₹2L - ₹13L / yr
Python
Data Science
SQL
Roles and Responsibilities:
  • Mine large volumes of credit behavior data to generate insights around product holdings and monetization opportunities for cross-sell
  • Use data science to size the opportunity and product potential for the launch of any new product/pilot
  • Build propensity models using heuristics and campaign performance to maximize efficiency
  • Conduct portfolio analysis and establish key metrics for cross-sell partnerships

Desired profile/Skills:
  • 2-5 years of experience with a degree in any quantitative discipline such as Engineering, Computer Science, Economics, Statistics, or Mathematics
  • Excellent problem-solving and comprehensive analytical skills: the ability to structure ambiguous problem statements, perform detailed analysis, and derive crisp insights
  • Solid experience using Python and SQL
  • Prior work experience in the financial services space would be highly valued

Location: Bangalore/ Ahmedabad
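The heuristic propensity model mentioned in the responsibilities could look something like the toy scorer below; the field names and weights are invented for illustration, and a production model would be calibrated against campaign performance data.

```python
def propensity_score(user):
    """Toy heuristic cross-sell propensity score in [0, 1]."""
    score = 0.0
    score += 0.4 if user.get("has_active_loan") else 0.0
    score += 0.3 if user.get("on_time_repayments", 0) >= 6 else 0.0
    # Engagement contributes proportionally, capped at 10 logins.
    score += 0.3 * min(user.get("logins_last_30d", 0) / 10, 1.0)
    return round(score, 2)

user = {"has_active_loan": True, "on_time_repayments": 8, "logins_last_30d": 5}
print(propensity_score(user))  # 0.85
```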
Precily Private Limited
at Precily Private Limited
5 recruiters
Bharath Rao
Posted by Bharath Rao
NCR (Delhi | Gurgaon | Noida)
1 - 3 yrs
₹3L - ₹9L / yr
Data Science
Artificial Neural Network (ANN)
Artificial Intelligence (AI)
Machine Learning (ML)
Python
Precily AI: automatic summarization that shortens a business document or book, creating a summary of the major points of the original. The AI produces a coherent summary taking into account variables such as length, writing style, and syntax. We are also working in the legal domain to reduce the high number of pending cases in India. We use Artificial Intelligence and Machine Learning capabilities such as NLP and neural networks to process data and provide solutions for industries such as enterprise, healthcare, and legal.
Thrymr Software
at Thrymr Software
3 recruiters
Sharmistha Debnath
Posted by Sharmistha Debnath
Hyderabad
0 - 2 yrs
₹4L - ₹5L / yr
R Programming
Python
Data Science
Company Profile: Thrymr Software is an outsourced product development startup. Our primary development center is in Hyderabad, India, with a team of 100+ members across various technical roles. Thrymr is also in Singapore, Hamburg (Germany), and Amsterdam (Netherlands). Thrymr works with companies to take complete ownership of building their end-to-end products, be it web or mobile applications or advanced analytics including GIS, machine learning, or computer vision. http://thrymr.net

Job Location: Hyderabad, Financial District

Job Description: As a Data Scientist, you will evaluate and improve products. You will collaborate with a multidisciplinary team of engineers and analysts on a wide range of problems. Your responsibilities will include (but not be limited to) designing, benchmarking, and tuning machine-learning algorithms; painlessly and securely manipulating large and complex relational data sets; assembling complex machine-learning strategies; and building new predictive apps and services.

Responsibilities:
1. Work with large, complex datasets. Solve difficult, non-routine analysis problems, applying advanced analytical methods as needed. Conduct end-to-end analysis that includes data gathering and requirements specification, processing, analysis, ongoing deliverables, and presentations.
2. Build and prototype analysis pipelines iteratively to provide insights at scale. Develop a comprehensive understanding of data structures and metrics, advocating for changes where needed for both product development and sales activity.
3. Interact cross-functionally with a wide variety of people and teams. Work closely with engineers to identify opportunities for, design, and assess improvements to various products.
4. Make business recommendations (e.g., cost-benefit, forecasting, experiment analysis) with effective presentations of findings to multiple levels of stakeholders through visual displays of quantitative information.
5. Research and develop analysis, forecasting, and optimization methods to improve the quality of user-facing products; example application areas include ads quality, search quality, end-user behavioral modeling, and live experiments.
6. Apart from core data science work (subject to projects and availability), you may also be required to contribute to regular software development work such as web, backend, and mobile application development.

Our Expectations:
1. You should have at least a B.Tech/B.Sc degree or equivalent practical experience (e.g., statistics, operations research, bioinformatics, economics, computational biology, computer science, mathematics, physics, electrical engineering, industrial engineering).
2. You should have practical working experience with statistical packages (e.g., R, Python, NumPy) and databases (e.g., SQL).
3. You should have experience articulating business questions and using mathematical techniques to arrive at an answer using available data, and experience translating analysis results into business recommendations.
4. You should have strong analytical and research skills.
5. You should have good academics.
6. You will have to be very proactive and submit your daily/weekly reports diligently.
7. You should be comfortable working exceptionally hard, as we are a startup and this is a high-performance work environment.
8. This is NOT a 9-to-5 kind of job; you should be able to work long hours.

What you can expect:
1. High-performance work culture
2. Short-term travel across the globe at very short notice
3. Accelerated learning (you will learn at least three times as much as in similar roles at other companies) and becoming a lot more technical
4. A happy-go-lucky team with zero politics
5. Zero tolerance for unprofessional behavior and substandard performance
6. Performance-based appraisals that can happen anytime, with considerable hikes compared to the market-standard single-digit annual hikes
Indix
at Indix
1 recruiter
Sri Devi
Posted by Sri Devi
Chennai, Hyderabad
3 - 7 yrs
₹15L - ₹45L / yr
Data Science
Python
Algorithms
Data Structures
Scikit-Learn
Software Engineer – ML at Indix provides an opportunity to design and build systems that crunch large amounts of data every day.

What We're Looking For:
- 3+ years of experience
- Ability to propose hypotheses and design experiments in the context of specific problems; should come from a strong engineering background
- Good overlap with the Indix data tech stack, such as Hadoop, MapReduce, HDFS, Spark, Scalding, Scala/Python/C++
- Dedication and diligence in understanding the application domain, collecting/cleaning data, and conducting experiments
- Creativity in model and algorithm development
- An obsession with developing algorithms/models that directly impact business
- Master's/PhD in Computer Science/Statistics is a plus

Job Expectations:
- Experience working in text mining and with Python libraries like scikit-learn, NumPy, etc.
- Collect relevant data from production systems; use crawling and parsing infrastructure to put together data sets
- Survey academic literature and identify potential approaches for exploration
- Craft, conduct, and analyze experiments to evaluate models/algorithms
- Communicate findings and take algorithms/models to production with end-to-end ownership