Senior Data Engineer

at Bookr Inc

Posted by Nimish Mehta
Remote, Chennai, Bengaluru (Bangalore)
4 - 7 yrs
₹15L - ₹35L / yr (ESOP available)
Full time
Skills
Big Data
Hadoop
Spark
Data engineering
Data Warehouse (DWH)
ETL
EMR
Amazon Redshift
PostgreSQL
SQL
Scala
Java
Python
Airflow

In this role, you will:

  • Be a core member of the data platform team, setting up the platform foundation while adhering to required quality standards and design patterns
  • Write efficient, high-quality code that scales
  • Adopt Bookr's quality standards and recommend process standards and best practices
  • Research, learn, and adopt new technologies to solve problems and improve existing solutions
  • Contribute to the engineering excellence backlog
  • Identify performance issues
  • Conduct effective code and design reviews
  • Improve the reliability of the overall production system by proactively identifying patterns of failure
  • Lead and mentor junior engineers by example
  • End-to-end ownership of stories (including design, serviceability, performance, failure handling)
  • Strive hard to provide the best experience to anyone using our products
  • Conceptualise innovative and elegant solutions to solve challenging big data problems
  • Engage with Product Management and Business to drive the agenda, set your priorities and deliver awesome products
  • Adhere to company policies, procedures, mission, values, and standards of ethics and integrity

 

On day one, we'll expect you to have:

  • A B.E./B.Tech from a reputed institution
  • A minimum of 5 years of software development experience and at least a year of experience leading/guiding people
  • Expert coding skills in Python/PySpark or Java/Scala
  • A deep understanding of the big data ecosystem, particularly Hadoop and Spark
  • Project experience with Spark
  • The ability to independently troubleshoot Spark jobs (a minimal sketch of a typical job follows this list)
  • A good understanding of distributed systems
  • The ability to learn fast and quickly adapt to new technologies
  • High ownership and commitment
  • Expert hands-on experience with RDBMSs
  • The ability to work independently as well as collaboratively in a team
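
For illustration only, here is a minimal sketch of the kind of Spark batch job this role involves. The bucket paths, column names, and tuning values are assumptions for the sketch, not Bookr's actual pipeline.

```python
# Hypothetical PySpark rollup job; paths and columns are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder
    .appName("daily_bookings_rollup")                 # hypothetical job name
    .config("spark.sql.shuffle.partitions", "200")    # common knob when troubleshooting slow shuffles
    .getOrCreate()
)

# Read raw events (schema and location are assumed for the sketch)
events = spark.read.parquet("s3://example-bucket/raw/bookings/")

daily = (
    events
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("event_date", "tenant_id")
    .agg(
        F.count("*").alias("booking_count"),
        F.sum("amount").alias("gross_amount"),
    )
)

# Partitioned output keeps downstream warehouse loads incremental
daily.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-bucket/curated/daily_bookings/"
)
spark.stop()
```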

 

Added bonuses you may have:

  • Hands-on experience with EMR, Glue, or Databricks
  • Hands-on experience with Airflow (a minimal DAG sketch follows this list)
  • Hands-on experience with the AWS big data ecosystem
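
For the Airflow bonus above, a minimal DAG sketch; the task names, schedule, and commands are assumptions for illustration, not an actual Bookr workflow.

```python
# Hypothetical Airflow DAG; it only illustrates the orchestration pattern.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_bookings_rollup",
    start_date=datetime(2022, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = BashOperator(
        task_id="extract_raw_events",
        bash_command="echo 'pull raw events to S3'",                   # placeholder command
    )
    transform = BashOperator(
        task_id="run_spark_rollup",
        bash_command="echo 'spark-submit daily_bookings_rollup.py'",   # placeholder command
    )
    extract >> transform  # run the extract before the Spark rollup
```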

 

We are looking for passionate engineers who are always hungry for challenging problems. We believe in creating an opportunity-rich yet balanced work environment for savvy, entrepreneurial tech individuals. We thrive on remote work, with the team working across multiple time zones.

 

 

  • Flexible hours & remote work - We are a results-focused bunch, so we encourage you to work whenever and wherever you feel most creative and focused.
  • Unlimited PTO - We want you to feel free to recharge your batteries when you need it!
  • Stock options - Opportunity to participate in the company stock plan.
  • Flat hierarchy - Team leaders at your fingertips.
  • BFC (stands for bureaucracy-free company) - We're action-oriented and don't bother with dragged-out meetings or pointless admin exercises; we'd rather get our hands dirty!
  • Working alongside leaders - As part of the core team, you will have the opportunity to work directly with the founding and management team.

 


About Bookr Inc

Enterprise-grade accounting, business intelligence, and forecasting for the mid-market. Enable data-driven decisions across your organization with the first concierge accounting, analytics, forecasting, and reporting suite built specifically for the mid-market. Our team of analysts delivers enterp ...
Founded 2019  •  Products & Services  •  20-100 employees  •  Raised funding

Similar jobs

Data Engineer

at a product-based company

Agency job
via Zyvka Global Services
Spark
Hadoop
Big Data
Data engineering
PySpark
Python
Scala
Amazon Web Services (AWS)
ETL
CleverTap
Linux/Unix
Bengaluru (Bangalore)
3 - 12 yrs
₹5L - ₹30L / yr

Responsibilities:

  • Act as a technical resource for the Data Science team and be involved in creating and implementing current and future analytics projects such as data lake design, data warehouse design, etc.
  • Analyse and design ETL solutions to store/fetch data from multiple systems such as Google Analytics, CleverTap, CRM systems, etc.
  • Develop and maintain data pipelines for real-time analytics as well as batch analytics use cases.
  • Collaborate with data scientists and actively work in the feature engineering and data preparation phase of model building.
  • Collaborate with product development and DevOps teams in implementing data collection and aggregation solutions.
  • Ensure quality and consistency of the data in the data warehouse and follow best data governance practices.
  • Analyse large amounts of information to discover trends and patterns.
  • Mine and analyse data from company databases to drive optimization and improvement of product development, marketing techniques, and business strategies.

Requirements

  • Bachelor's or Master's degree in a highly numerate discipline such as Engineering, Science, or Economics
  • 2-6 years of proven experience working as a Data Engineer, preferably in an e-commerce, web-based, or consumer technology company
  • Hands-on experience working with big data tools such as Hadoop, Spark, Flink, and Kafka
  • Good understanding of the AWS ecosystem for big data analytics
  • Hands-on experience creating data pipelines, either using tools or by independently writing scripts
  • Hands-on experience with scripting languages such as Python, Scala, and Unix shell scripting
  • Strong problem-solving skills with an emphasis on product development
  • Experience using business intelligence tools (e.g. Tableau, Power BI) would be an added advantage (not mandatory)
Job posted by
Ridhima Sharma

Senior Data Engineer

at Velocity.in

Founded 2019  •  Product  •  20-100 employees  •  Raised funding
ETL
Informatica
Data Warehouse (DWH)
Data engineering
Oracle
PostgreSQL
DevOps
Amazon Web Services (AWS)
NodeJS (Node.js)
Ruby on Rails (ROR)
React.js
Python
Bengaluru (Bangalore)
4 - 9 yrs
₹15L - ₹35L / yr

We are an early stage start-up, building new fintech products for small businesses. Founders are IIT-IIM alumni, with prior experience across management consulting, venture capital and fintech startups. We are driven by the vision to empower small business owners with technology and dramatically improve their access to financial services. To start with, we are building a simple, yet powerful solution to address a deep pain point for these owners: cash flow management. Over time, we will also add digital banking and 1-click financing to our suite of offerings.

 

We have developed an MVP which is being tested in the market. We have closed our seed funding from marquee global investors and are now actively building a world class tech team. We are a young, passionate team with a strong grip on this space and are looking to on-board enthusiastic, entrepreneurial individuals to partner with us in this exciting journey. We offer a high degree of autonomy, a collaborative fast-paced work environment and most importantly, a chance to create unparalleled impact using technology.

 

Reach out if you want to get in on the ground floor of something which can turbocharge SME banking in India!

 

The technology stack at Velocity comprises a wide variety of cutting-edge technologies, including NodeJS, Ruby on Rails, Reactive Programming, Kubernetes, AWS, Python, ReactJS, Redux (Saga), Redis, Lambda, etc.

 

Key Responsibilities

  • Build data and analytical engineering pipelines with standard ELT patterns, implement data compaction pipelines and data modelling, and oversee overall data quality

  • Work with the Office of the CTO as an active member of our architecture guild

  • Write pipelines to consume data from multiple sources (a minimal sketch follows this list)

  • Write a data transformation layer using dbt to transform millions of records into data warehouse models

  • Implement data warehouse entities with common, reusable data model designs, with automation and data quality capabilities

  • Identify downstream implications of data loads/migration (e.g., data quality, regulatory)
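
As a rough illustration of the "consume data from multiple sources" responsibility, here is a minimal ELT-style sketch that lands raw records in Postgres for later transformation (for example with dbt). The endpoint, table name, and credentials are hypothetical, not Velocity's systems.

```python
# Hypothetical extract-and-load step; all names and credentials are placeholders.
import json

import psycopg2
import requests

SOURCE_URL = "https://api.example.com/v1/transactions"            # assumed source API
DSN = "dbname=warehouse user=etl password=secret host=localhost"  # placeholder DSN


def extract():
    resp = requests.get(SOURCE_URL, timeout=30)
    resp.raise_for_status()
    return resp.json()  # assume the API returns a JSON list of records


def load(records):
    with psycopg2.connect(DSN) as conn, conn.cursor() as cur:
        cur.execute(
            "CREATE TABLE IF NOT EXISTS raw_transactions "
            "(payload JSONB, loaded_at TIMESTAMPTZ DEFAULT now())"
        )
        cur.executemany(
            "INSERT INTO raw_transactions (payload) VALUES (%s)",
            [(json.dumps(r),) for r in records],
        )


if __name__ == "__main__":
    load(extract())
```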

 

What To Bring

  • 5+ years of software development experience; startup experience is a plus

  • Prior experience working with Airflow and dbt is preferred

  • 5+ years of experience working in any backend programming language

  • Strong first-hand experience with data pipelines and relational databases such as Oracle, Postgres, SQL Server, or MySQL

  • Experience with DevOps tools (GitHub, Travis CI, and JIRA) and methodologies (Lean, Agile, Scrum, Test Driven Development)

  • Experience formulating ideas, building proofs of concept (POCs), and converting them into production-ready projects

  • Experience building and deploying applications on on-premise infrastructure and on AWS or Google Cloud

  • A basic understanding of Kubernetes and Docker is a must

  • Experience in data processing (ETL, ELT) and/or cloud-based platforms

  • Working proficiency and communication skills in verbal and written English.

 

Job posted by
Newali Hazarika

Big Data Engineer / Lead

at Paytm Payments Bank

Founded 2017  •  Product  •  5000+ employees  •  Profitable
Scala
Amazon Web Services (AWS)
Spark
Python
Hadoop
Big Data
Data engineering
PySpark
ETL
Data Structures
Business Intelligence (BI)
Noida
3 - 12 yrs
₹20L - ₹50L / yr

This position is for a Big Data Engineer/Lead specializing in Hadoop, Spark, and AWS data engineering technologies, with 3 to 12 years of experience.

 

Roles & Responsibilities
For this role, we require someone with a strong product design sense. The position requires one to work on complex technical projects and to work closely with peers in an innovative and fast-paced environment.

  • Grow our analytics capabilities with faster, more reliable data pipelines, and better tools, handling petabytes of data every day.
  • Brainstorm and create new platforms, and migrate existing ones to AWS, helping in our quest to make data available to cluster users in all shapes and forms, with low latency and horizontal scalability.
  • Make changes to our data platform, refactoring/redesigning as needed and diagnosing any problems across the entire technical stack.
  • Design and develop a real-time events pipeline for data ingestion to power real-time dashboards (a streaming sketch follows this list).
  • Develop complex and efficient functions to transform raw data sources into powerful, reliable components of our data lake.
  • Design and implement new components and various emerging technologies in AWS and the Hadoop ecosystem, and ensure successful execution of various projects.
  • Optimize and improve existing features or data processes for performance and stability.
  • Conduct peer design and code reviews.
  • Write unit tests and support continuous integration.
  • Be obsessed about quality and ensure minimal production downtimes.
  • Mentor peers, share information and knowledge and help build a great team.
  • Monitor job performances, file system/disk-space management, cluster & database connectivity, log files, management of backup/security and troubleshooting various user issues.
  • Collaborate with various cross-functional teams: infrastructure, network, database.
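
As a rough sketch of the real-time events pipeline bullet above, here is a minimal PySpark Structured Streaming job; the Kafka broker, topic, schema, and sink are assumptions for illustration, not the actual stack.

```python
# Hypothetical streaming ingestion job feeding a real-time dashboard.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("events_ingest").getOrCreate()

event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("event_type", StringType()),
    StructField("event_ts", TimestampType()),
])

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")   # placeholder broker
    .option("subscribe", "app-events")                   # hypothetical topic
    .load()
    .select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
)

# Count events per type over 1-minute windows for a dashboard feed
counts = (
    events
    .withWatermark("event_ts", "5 minutes")
    .groupBy(F.window("event_ts", "1 minute"), "event_type")
    .count()
)

query = (
    counts.writeStream.outputMode("update")
    .format("console")   # in practice this would feed a serving store or dashboard
    .start()
)
query.awaitTermination()
```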

 

Must have skills: Python, AWS, Scala, Spark, Hadoop, Big Data Analytics

 

Desired Skills

  • Fluent with data structures, algorithms and design patterns.
  • Strong hands-on experience with Hadoop, MapReduce, Hive, Spark.
  • Excellent programming/debugging skills in Java/Scala.
  • Experience with any scripting language such as Python, Bash etc.
  • Good to have: experience working with NoSQL databases like HBase and Cassandra.
  • Experience with BI tools like AWS QuickSight, dashboarding, and metrics.
  • Hands-on programming experience with multithreaded applications.
  • Good to have: experience with databases, SQL, and messaging queues like Kafka.
  • Good to have: experience developing streaming applications, e.g. Spark Streaming, Flink, Storm, etc.
  • Good to have: experience with AWS and cloud technologies such as S3.
  • Experience with caching architectures like Redis, Memcached, etc.
  • Experience with memory optimization and GC tuning.
  • Experience with profiling and performance optimizations.
  • Experience with agile development methodologies and DevOps practices.

 

Job posted by
Saurabh Gupta

Data Analyst

at MedCords

Founded 2016  •  Product  •  20-100 employees  •  Raised funding
Data Analytics
Data Analyst
R Language
Python
Kota
0 - 1 yrs
₹1L - ₹2.5L / yr

Required: Python, R

Work on handling large-scale data engineering pipelines.
Excellent verbal and written communication skills.
Proficiency in PowerPoint or other presentation tools.
Ability to work quickly and accurately on multiple projects.

Job posted by
kavita jain

ETL IDQ Developer

at a client company in the computer software space (EC1)

Agency job
via Multi Recruit
ETL
IDQ
Bengaluru (Bangalore)
3 - 8 yrs
₹13L - ₹22L / yr
  • Participate in planning, implementation of solutions, and transformation programs from legacy system to a cloud-based system
  • Work with the team on analysis, and on high-level and low-level design, for ETL- or ELT-based solutions and DB services in RDS
  • Work closely with the architect and engineers to design systems that effectively reflect business needs, security requirements, and service level requirements
  • Own deliverables related to design and implementation
  • Own Sprint tasks and drive the team towards the goal while understanding the change and release process defined by the organization.
  • Excellent communication skills, particularly those relating to complex findings and presenting them to ensure audience appeal at various levels of the organization
  • Ability to integrate research and best practices into problem avoidance and continuous improvement
  • Must be able to perform as an effective member in a team-oriented environment, maintain a positive attitude, and achieve desired results while working with minimal supervision


Basic Qualifications:

  • Minimum of 5+ years of technical work experience in the implementation of complex, large scale, enterprise-wide projects including analysis, design, core development, and delivery
  • Minimum of 3+ years of experience with expertise in Informatica ETL, Informatica Power Center, and Informatica Data Quality
  • Experience with Informatica MDM tool is good to have
  • Should be able to understand the scope of the work and ask for clarifications
  • Should have advanced SQL skills, including complex PL/SQL coding skills
  • Knowledge of Agile is a plus
  • Well-versed with SOAP, web services, and REST APIs
  • Hands-on development using Java would be a plus

 

 

 

Job posted by
Manjunath Multirecruit

Principal Data Scientist

at Antuit

Founded 2013  •  Product  •  100-500 employees  •  Profitable
Data Science
Machine Learning (ML)
Artificial Intelligence (AI)
Data Scientist
Python
PyTorch
Supply Chain Management (SCM)
Time series
Demand forecasting
MLOPs
C++
Java
TensorFlow
Kubernetes
Bengaluru (Bangalore)
8 - 12 yrs
₹25L - ₹30L / yr

About antuit.ai

 

Antuit.ai is the leader in AI-powered SaaS solutions for Demand Forecasting & Planning, Merchandising and Pricing. We have the industry’s first solution portfolio – powered by Artificial Intelligence and Machine Learning – that can help you digitally transform your Forecasting, Assortment, Pricing, and Personalization solutions. World-class retailers and consumer goods manufacturers leverage antuit.ai solutions, at scale, to drive outsized business results globally with higher sales, margin and sell-through.

 

Antuit.ai’s executives, comprised of industry leaders from McKinsey, Accenture, IBM, and SAS, and our team of Ph.Ds., data scientists, technologists, and domain experts, are passionate about delivering real value to our clients. Antuit.ai is funded by Goldman Sachs and Zodius Capital.

 

The Role:

 

Antuit.ai is interested in hiring a Principal Data Scientist. This person will facilitate standing up a standardization and automation ecosystem for ML product delivery, and will also actively participate in managing the implementation, design, and tuning of the product to meet business needs.

 

Responsibilities:

 

Responsibilities include, but are not limited to, the following:

 

  • Manage and provide technical expertise to the delivery team, including recommending solution alternatives, identifying risks, and managing business expectations.
  • Design and build reliable, scalable automated processes for large-scale machine learning.
  • Use engineering expertise to help design solutions to novel problems in software development, data engineering, and machine learning.
  • Collaborate with Business, Technology, and Product teams to stand up the MLOps process.
  • Apply your experience making intelligent, forward-thinking technical decisions to deliver the ML ecosystem, including implementing new standards, architecture design, and workflow tools.
  • Deep dive into complex algorithmic and product issues in production.
  • Own metrics and reporting for the delivery team.
  • Set a clear vision for team members and work cohesively to attain it.
  • Mentor and coach team members.


Qualifications and Skills:

 

Requirements

  • Engineering degree in any stream
  • At least 7 years of prior experience building ML-driven products/solutions
  • Excellent programming skills in at least one of C++, Python, or Java
  • Hands-on experience with open-source libraries and frameworks such as TensorFlow, PyTorch, MLflow, Kubeflow, etc. (a minimal tracking sketch follows this list)
  • Has developed and productized large-scale models/algorithms
  • Can drive fast prototypes/proofs of concept when evaluating various technologies, frameworks, and performance benchmarks
  • Familiar with software development practices/pipelines (DevOps: Kubernetes, Docker containers, CI/CD tools)
  • Good verbal, written and presentation skills.
  • Ability to learn new skills and technologies.
  • 3+ years working with retail or CPG preferred.
  • Experience in forecasting and optimization problems, particularly in the CPG / Retail industry preferred.
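
Purely as an illustration of the MLflow item above, a minimal experiment-tracking sketch; the experiment name, parameters, and metric values are hypothetical.

```python
# Hypothetical MLflow run logging; names and values are placeholders.
from pathlib import Path

import mlflow

mlflow.set_experiment("demand_forecast_poc")  # assumed experiment name

with mlflow.start_run(run_name="baseline"):
    mlflow.log_param("model", "gradient_boosting")
    mlflow.log_param("horizon_weeks", 12)
    mlflow.log_metric("wmape", 0.23)          # placeholder metric value

    # Attach a small artifact so the run carries a report
    report = Path("forecast_report.txt")
    report.write_text("placeholder forecast report")
    mlflow.log_artifact(str(report))
```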

 

Information Security Responsibilities

 

  • Understand and adhere to information security policies, guidelines, and procedures, and practice them to protect organizational data and information systems.
  • Take part in information security training and act accordingly while handling information.
  • Report all suspected security and policy breaches to the Infosec team or the appropriate authority (CISO).

EEOC

 

Antuit.ai is an at-will, equal opportunity employer.  We consider applicants for all positions without regard to race, color, religion, national origin or ancestry, gender identity, sex, age (40+), marital status, disability, veteran status, or any other legally protected status under local, state, or federal law.
Job posted by
Purnendu Shakunt

Data Scientist

at Yottaasys AI LLC

Founded 2018  •  Product  •  0-20 employees  •  Raised funding
Data Science
Deep Learning
R Programming
Python
Machine Learning (ML)
Video compression
Data Analytics
Bengaluru (Bangalore), Singapore
2 - 5 yrs
₹9L - ₹20L / yr
We are a US-headquartered product company looking to hire a few passionate deep learning and computer vision team players with 2-5 years of experience! If you are any of these:
1. Expert in deep learning and machine learning techniques,
2. Extremely good at image/video processing,
3. Have a good understanding of linear algebra, optimization techniques, statistics, and pattern recognition,
then you are the right fit for this position.
Job posted by
Dinesh Krishnan

Machine Learning Engineer

at CloudMoyo

Founded 2015  •  Products & Services  •  100-1000 employees  •  Profitable
Machine Learning (ML)
Python
Artificial Intelligence (AI)
Deep Learning
Natural Language Processing (NLP)
Neo4J
Data Visualization
Data Science
Pune
10 - 16 yrs
₹10L - ₹20L / yr

Job Description:

Roles & Responsibilities:

· You will be involved in every part of the project lifecycle, right from identifying the business problem and proposing a solution, to data collection, cleaning, and preprocessing, to training and optimizing ML/DL models and deploying them to production.

· You will often be required to design and execute proof-of-concept projects that can demonstrate business value and build confidence with CloudMoyo’s clients.

· You will be involved in designing and delivering data visualizations that utilize the ML models to generate insights and intuitively deliver business value to CXOs.


Desired Skill Set:

· Candidates should have strong Python coding skills and be comfortable working with various ML/DL frameworks and libraries.

· Hands-on skills and industry experience in one or more of the following areas is necessary:

1)      Deep Learning (CNNs/RNNs, Reinforcement Learning, VAEs/GANs)

2)      Machine Learning (Regression, Random Forests, SVMs, K-means, ensemble methods)

3)      Natural Language Processing

4)      Graph Databases (Neo4j, Apache Giraph)

5)      Azure Bot Service

6)      Azure ML Studio / Azure Cognitive Services

7)      Log Analytics with NLP/ML/DL

· Previous experience with data visualization, C# or Azure Cloud platform and services will be a plus.

· Candidates should have excellent communication skills and be highly technical, with the ability to discuss ideas at any level from executive to developer.

· Creative problem-solving, unconventional approaches, and a hacker mindset are highly desired.

Job posted by
Sarabjeet Singh

Artificial Intelligence Developers

at Precily Private Limited

Founded 2016  •  Product  •  20-100 employees  •  Raised funding
Data Science
Artificial Neural Network (ANN)
Artificial Intelligence (AI)
Machine Learning (ML)
Python
TensorFlow
Natural Language Processing (NLP)
Big Data
NCR (Delhi | Gurgaon | Noida)
1 - 3 yrs
₹3L - ₹9L / yr
Precily AI: Automatic summarization, shortening a business document or book with our AI. Create a summary of the major points of the original document. The AI can produce a coherent summary taking into account variables such as length, writing style, and syntax. We're also working in the legal domain to reduce the high number of pending cases in India. We use Artificial Intelligence and Machine Learning capabilities such as NLP and neural networks in processing the data to provide solutions for various industries such as Enterprise, Healthcare, and Legal.
Job posted by
Bharath Rao

Enthusiastic Cloud-ML Engineers with a keen sense of curiosity

at Talent Sculpt

Founded 2012  •  Products & Services  •  100-1000 employees  •  Raised funding
Java
Python
Spark
Hadoop
MongoDB
Scala
Natural Language Processing (NLP)
Machine Learning (ML)
Bengaluru (Bangalore)
3 - 12 yrs
₹3L - ₹25L / yr
We are a start-up in India seeking excellence in everything we do with an unwavering curiosity and enthusiasm. We build a simplified, new-age, AI-driven big data analytics platform for global enterprises and solve their biggest business challenges. Our engineers develop fresh, intuitive solutions keeping the user at the center of everything. As a Cloud-ML Engineer, you will design and implement ML solutions for customer use cases and problem-solve complex technical customer challenges.

Expectations and Tasks

  • A total of 7+ years of experience, with a minimum of 2 years in Hadoop technologies like HDFS, Hive, and MapReduce
  • Experience working with recommendation engines, data pipelines, or distributed machine learning, and experience with data analytics and data visualization techniques and software
  • Experience with core data science techniques such as regression, classification, or clustering, and experience with deep learning frameworks
  • Experience in NLP, R, and Python
  • Experience in performance tuning and optimization techniques to process big data from heterogeneous sources
  • Ability to communicate clearly and concisely across technology and business teams
  • Excellent problem-solving and technical troubleshooting skills
  • Ability to handle multiple projects and prioritize tasks in a rapidly changing environment

Technical Skills: Core Java, Multithreading, Collections, OOPS, Python, R, Apache Spark, MapReduce, Hive, HDFS, Hadoop, MongoDB, Scala

We are a retained search firm employed by our client, a technology start-up in Bangalore. Interested candidates can share their resumes with me at [email protected]. I will respond to you within 24 hours. Online assessments and pre-employment screening are part of the selection process.
Job posted by
Blitzkrieg HR Consulting