Data Engineer / Analyst

at A Product Company

Agency job
Bengaluru (Bangalore)
3 - 6 yrs
₹15L - ₹26L / yr (ESOP available)
Full time
Skills
Looker
Big Data
Hadoop
Spark
Apache Hive
Data Analytics
Data Warehouse (DWH)
SQL
Amazon Web Services (AWS)
Job Title: Senior Data Engineer/Analyst
Location: Bengaluru
Department: Engineering

Bidgely is looking for an extraordinary and dynamic Senior Data Analyst to be part of its core team in Bangalore. You must have delivered exceptionally high-quality, robust products dealing with large volumes of data. Be part of a highly energetic and innovative team that believes nothing is impossible with some creativity and hard work.

Responsibilities 
● Design and implement a high-volume data analytics pipeline in Looker for Bidgely's flagship product.
● Implement data pipelines in the Bidgely Data Lake.
● Collaborate with product management and engineering teams to elicit and understand their requirements and challenges, and develop potential solutions.
● Stay current with the latest tools, technologies, and methodologies; share knowledge by clearly articulating results and ideas to key decision makers.

Requirements 
● 3-5 years of strong experience in data analytics and in developing data pipelines.
● Strong expertise in Looker.
● Strong in data modeling and in developing and optimizing SQL queries.
● Good knowledge of data warehouses (Amazon Redshift, BigQuery, Snowflake, Hive).
● Good understanding of Big Data applications (Hadoop, Spark, Hive, Airflow, S3, Cloudera).
● Attention to detail. Strong communication and collaboration skills.
● BS/MS in Computer Science or equivalent from premier institutes.
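As a rough illustration of the SQL query-writing and optimization skills this role calls for, here is a minimal, hypothetical sketch. Python's standard-library sqlite3 stands in for a real warehouse like Redshift or BigQuery, and the table and column names are invented:

```python
import sqlite3

# Toy event table standing in for a warehouse fact table.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE events (user_id INTEGER, day TEXT, value REAL)")
cur.executemany("INSERT INTO events VALUES (?, ?, ?)", [
    (1, "2024-01-01", 10.0),
    (1, "2024-01-02", 20.0),
    (2, "2024-01-01", 5.0),
])

# Per-user rollup -- the bread and butter of a reporting/analytics layer.
cur.execute("""
    SELECT user_id, SUM(value) AS total
    FROM events
    GROUP BY user_id
    ORDER BY user_id
""")
print(cur.fetchall())  # [(1, 30.0), (2, 5.0)]

# Query optimization typically starts with indexing the grouped/filtered column.
cur.execute("CREATE INDEX idx_events_user ON events(user_id)")
```

Real warehouse tuning adds distribution/sort keys and partition pruning on top of this idea, but the habit is the same: inspect the plan, then index or restructure.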

Similar jobs

SQL Lead

at Datametica Solutions Private Limited

Founded 2013  •  Products & Services  •  100-1000 employees  •  Profitable
PL/SQL
MySQL
SQL server
SQL
Linux/Unix
Shell Scripting
Data modeling
Data Warehouse (DWH)
ETL
Pune, Hyderabad
6 - 12 yrs
₹11L - ₹25L / yr

We at Datametica Solutions Private Limited are looking for an SQL Lead / Architect who has a passion for the cloud, with knowledge of on-premises and cloud data implementations in the field of Big Data and Analytics, including but not limited to Teradata, Netezza, Exadata, Oracle, Cloudera, Hortonworks, and the like.

Ideal candidates should have technical experience in migrations and the ability to help customers get value from Datametica's tools and accelerators.



Job Description :

Experience: 6+ Years

Work Location: Pune / Hyderabad



Technical Skills :

  • Good programming experience as an Oracle PL/SQL, MySQL, and SQL Server developer
  • Knowledge of database performance tuning techniques
  • Rich experience in database development
  • Experience in designing and implementing business applications using the Oracle Relational Database Management System
  • Experience in developing complex database objects like stored procedures, functions, packages, and triggers using SQL and PL/SQL
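The database-objects bullet above can be sketched minimally. SQLite (via Python's sqlite3) stands in for Oracle PL/SQL here, and the table and trigger names are invented for illustration:

```python
import sqlite3

# Hypothetical audit trigger: every insert into orders is logged automatically.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")
cur.execute("CREATE TABLE audit (order_id INTEGER, amount REAL)")
cur.execute("""
    CREATE TRIGGER log_order AFTER INSERT ON orders
    BEGIN
        INSERT INTO audit VALUES (NEW.id, NEW.amount);
    END
""")

cur.execute("INSERT INTO orders (amount) VALUES (99.5)")
print(cur.execute("SELECT order_id, amount FROM audit").fetchall())  # [(1, 99.5)]
```

In Oracle, the same idea would be expressed as a PL/SQL trigger or packaged procedure; the point is that logic lives next to the data and fires declaratively.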

Required Candidate Profile :

  • Excellent communication, interpersonal, and analytical skills, and a strong ability to drive teams
  • Analyze data requirements and data dictionaries for moderate to complex projects
  • Lead data-model-related analysis discussions while collaborating with application development teams, business analysts, and data analysts during joint requirements analysis sessions
  • Translate business requirements into technical specifications with an emphasis on highly available and scalable global solutions
  • Stakeholder management and client engagement skills
  • Strong communication skills (written and verbal)

About Us!

A global leader in the Data Warehouse Migration and Modernization to the Cloud, we empower businesses by migrating their Data/Workload/ETL/Analytics to the Cloud by leveraging Automation.

We have expertise in transforming legacy platforms such as Teradata, Oracle, Hadoop, Netezza, Vertica, and Greenplum, along with ETL tools like Informatica, DataStage, Ab Initio, and others, to cloud-based data warehousing, with further capabilities in data engineering, advanced analytics solutions, data management, data lakes, and cloud optimization.

Datametica is a key partner of the major cloud service providers - Google, Microsoft, Amazon, Snowflake.

We have our own products!

Eagle Data warehouse Assessment & Migration Planning Product

Raven Automated Workload Conversion Product

Pelican Automated Data Validation Product, which helps automate and accelerate data migration to the cloud.



Why join us!

Datametica is a place to innovate, bring new ideas to life, and learn new things. We believe in building a culture of innovation, growth, and belonging. Our people and their dedication over these years are the key factors in achieving our success.



Benefits we Provide!

Working with highly technical, passionate, mission-driven people

Subsidized Meals & Snacks

Flexible Schedule

Approachable leadership

Access to various learning tools and programs

Pet Friendly

Certification Reimbursement Policy



Check out more about us on our website below!

www.datametica.com

Job posted by
Sayali Kachi

Data Architect

at Hypersonix Inc

Founded 2018  •  Product  •  100-500 employees  •  Profitable
Big Data
Data Warehouse (DWH)
Apache Kafka
Spark
Hadoop
Data engineering
Artificial Intelligence (AI)
Machine Learning (ML)
Data Structures
Data modeling
Data wrangling
Data integration
Data-driven testing
Database performance tuning
Apache Storm
Python
Scala
SQL
Amazon Web Services (AWS)
SQL Azure
kafka
databricks
Flinks
druid
Airflow
Luigi
Nifi
Talend
Bengaluru (Bangalore)
10 - 15 yrs
₹15L - ₹20L / yr
Hypersonix.ai is disrupting the Business Intelligence and Analytics space with AI, ML, and NLP capabilities to drive specific business insights with a conversational user experience. Hypersonix.ai has been built from the ground up with new-age technology to simplify the consumption of data for our customers in restaurants, hospitality, and other industry verticals.

Hypersonix.ai is seeking a Data Evangelist who can work closely with customers to understand the data sources, acquire data and drive product success by delivering insights based on customer needs.

Primary Responsibilities :

- Lead and deliver complete application lifecycle design, development, deployment, and support for actionable BI and Advanced Analytics solutions

- Design and develop data models and ETL process for structured and unstructured data that is distributed across multiple Cloud platforms

- Develop and deliver solutions with data streaming capabilities for a large volume of data

- Design, code and maintain parts of the product and drive customer adoption

- Build data acquisition strategy to onboard customer data with speed and accuracy

- Working both independently and with team members to develop, refine, implement, and scale ETL processes

- On-going support and maintenance of live-clients for their data and analytics needs

- Defining the data automation architecture to drive self-service data load capabilities

Required Qualifications :

- Bachelor's/Master's/Ph.D. in Computer Science, Information Systems, Data Science, Artificial Intelligence, Machine Learning, or related disciplines

- 10+ years of experience guiding the development and implementation of Data architecture in structured, unstructured, and semi-structured data environments.

- Highly proficient in Big Data, data architecture, data modeling, data warehousing, data wrangling, data integration, data testing and application performance tuning

- Experience with data engineering tools and platforms such as Kafka, Spark, Databricks, Flink, Storm, Druid and Hadoop

- Strong with hands-on programming and scripting for Big Data ecosystem (Python, Scala, Spark, etc)

- Experience building batch and streaming ETL data pipelines using workflow management tools like Airflow, Luigi, NiFi, Talend, etc

- Familiarity with cloud-based platforms like AWS, Azure or GCP

- Experience with cloud data warehouses like Redshift and Snowflake

- Proficient in writing complex SQL queries.

- Excellent communication skills and prior experience of working closely with customers

- Data-savvy; loves to understand large-scale data trends and is obsessed with data analysis

- Desire to learn about, explore, and invent new tools for solving real-world problems using data
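The batch ETL pipelines mentioned in the qualifications above all share one core move: chunking an unbounded record stream into fixed-size batches. A toy, standard-library sketch of that idea (not any specific workflow tool's API; the helper name is invented):

```python
from itertools import islice

def batches(iterable, size):
    """Yield fixed-size lists from any iterable -- the inner loop of a batch ETL job."""
    it = iter(iterable)
    while True:
        chunk = list(islice(it, size))
        if not chunk:
            return
        yield chunk

# A stream of 7 records loaded in batches of 3: the last batch is smaller.
print([b for b in batches(range(7), 3)])  # [[0, 1, 2], [3, 4, 5], [6]]
```

Tools like Airflow or Luigi schedule and retry such steps; the batching logic itself stays this simple.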

Desired Qualifications :

- Cloud computing experience, Amazon Web Services (AWS)

- Prior experience in Data Warehousing concepts, multi-dimensional data models

- Full command of Analytics concepts including Dimension, KPI, Reports & Dashboards

- Prior experience in managing client implementation of Analytics projects

- Knowledge and prior experience of using machine learning tools
Job posted by
Gowshini Maheswaran

Senior Data Engineer

at InnovAccer

Founded 2014  •  Products & Services  •  100-1000 employees  •  Profitable
ETL
SQL
Data Warehouse (DWH)
Informatica
Datawarehousing
ELT
SSIS
Noida, Bengaluru (Bangalore), Pune, Hyderabad
4 - 7 yrs
₹4L - ₹16L / yr

We are looking for a Senior Data Engineer to join the Customer Innovation team, who will be responsible for acquiring, transforming, and integrating customer data onto our Data Activation Platform from customers’ clinical, claims, and other data sources. You will work closely with customers to build data and analytics solutions to support their business needs, and be the engine that powers the partnership that we build with them by delivering high-fidelity data assets.

In this role, you will work closely with our Product Managers, Data Scientists, and Software Engineers to build the solution architecture that will support customer objectives. You'll work with some of the brightest minds in the industry, work with one of the richest healthcare data sets in the world, use cutting-edge technology, and see your efforts affect products and people on a regular basis. The ideal candidate is someone that

  • Has healthcare experience and is passionate about helping heal people,
  • Loves working with data,
  • Has an obsessive focus on data quality,
  • Is comfortable with ambiguity and making decisions based on available data and reasonable assumptions,
  • Has strong data interrogation and analysis skills,
  • Defaults to written communication and delivers clean documentation, and,
  • Enjoys working with customers and problem solving for them.

A day in the life at Innovaccer:

  • Define the end-to-end solution architecture for projects by mapping customers’ business and technical requirements against the suite of Innovaccer products and Solutions.
  • Measure and communicate impact to our customers.
  • Enable customers to activate data themselves using SQL, BI tools, or APIs to answer the questions they have at speed.

What You Need:

  • 4+ years of experience in a Data Engineering role and a graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field.
  • 4+ years of experience working with relational databases like Snowflake, Redshift, or Postgres.
  • Intermediate to advanced level SQL programming skills.
  • Data Analytics and Visualization (using tools like PowerBI)
  • The ability to engage with both the business and technical teams of a client - to document and explain technical problems or concepts in a clear and concise way.
  • Ability to work in a fast-paced and agile environment.
  • Easily adapt and learn new things whether it’s a new library, framework, process, or visual design concept.

What we offer:

  • Industry certifications: We want you to be a subject matter expert in what you do. So, whether it’s our product or our domain, we’ll help you dive in and get certified.
  • Quarterly rewards and recognition programs: We foster learning and encourage people to take risks. We recognize and reward your hard work.
  • Health benefits: We cover health insurance for you and your loved ones.
  • Sabbatical policy: We encourage people to take time off and rejuvenate, learn new skills, and pursue their interests so they can generate new ideas with Innovaccer.
  • Pet-friendly office and open floor plan: No boring cubicles.
Job posted by
Jyoti Kaushik

Machine Learning Engineer - NLP

at Leena AI

Founded 2015  •  Product  •  20-100 employees  •  Raised funding
Natural Language Processing (NLP)
Machine Learning (ML)
Data Science
Data Analytics
Remote only
2 - 8 yrs
₹25L - ₹40L / yr

Responsibilities: 

  • Improve the robustness of Leena AI's current NLP stack
  • Increase the zero-shot learning capability of Leena AI's current NLP stack
  • Opportunity to add/build new NLP architectures based on requirements
  • Manage the end-to-end lifecycle of data in the system until it achieves more than 90% accuracy
  • Manage an NLP team

Requirements: 

  • Strong understanding of linear algebra, optimisation, probability, statistics 
  • Experience in the data science methodology from exploratory data analysis, feature engineering, model selection, deployment of the model at scale and model evaluation 
  • Experience in deploying NLP architectures in production 
  • Understanding of latest NLP architectures like transformers is good to have 
  • Experience in adversarial attacks/robustness of DNN is good to have 
  • Experience with Python Web Framework (Django), Analytics and Machine Learning frameworks like Tensorflow/Keras/Pytorch. 
Job posted by
Preethi Gothandam

Data Engineer (Azure)

at Scry Analytics

Founded 2015  •  Product  •  100-500 employees  •  Profitable
PySpark
Data engineering
Big Data
Hadoop
Spark
Windows Azure
Amazon Web Services (AWS)
Google Cloud Platform (GCP)
SQL
NOSQL Databases
Apache Kafka
Remote only
3 - 8 yrs
₹15L - ₹20L / yr

Title: Data Engineer (Azure) (Location: Gurgaon/Hyderabad)

Salary: Competitive as per Industry Standard

We are expanding our Data Engineering Team and hiring passionate professionals with extensive knowledge and experience in building and managing large enterprise data and analytics platforms. We are looking for creative individuals with strong programming skills, who can understand complex business and architectural problems and develop solutions. The individual will work closely with the rest of our data engineering and data science team in implementing and managing Scalable Smart Data Lakes, Data Ingestion Platforms, Machine Learning and NLP based Analytics Platforms, Hyper-Scale Processing Clusters, Data Mining and Search Engines.

What You’ll Need:

  • 3+ years of industry experience in creating and managing end-to-end data solutions, optimal data processing pipelines, and architecture dealing with large-volume, big data sets of varied data types.

  • Proficiency in Python, Linux and shell scripting.
  • Strong knowledge of working with PySpark dataframes, Pandas dataframes for writing efficient pre-processing and other data manipulation tasks.
  • Strong experience in developing the infrastructure required for data ingestion and for optimal extraction, transformation, and loading of data from a wide variety of data sources using tools like Azure Data Factory and Azure Databricks (or Jupyter notebooks/Google Colab, or other similar tools).

  • Working knowledge of GitHub or other version control tools.
  • Experience with creating RESTful web services and API platforms.
  • Work with data science and infrastructure team members to implement practical machine learning solutions and pipelines in production.
  • Experience with cloud providers like Azure/AWS/GCP.
  • Experience with SQL and NoSQL databases: MySQL, Azure Cosmos DB, HBase, MongoDB, Elasticsearch, etc.
  • Experience with stream-processing systems (Spark Streaming, Kafka, etc.) and working experience with event-driven architectures.
  • Strong analytic skills related to working with unstructured datasets.
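As a tiny illustration of the dataframe-style pre-processing these requirements describe (in practice this would be a Pandas or PySpark groupby; here the standard library is used and the column names are invented):

```python
import csv
import io
from collections import defaultdict

# Toy CSV standing in for an ingested source file.
raw = io.StringIO("user,amount\nalice,10\nbob,5\nalice,7\n")

# Parse, cast, and aggregate -- the cleanse-then-rollup pattern of a
# pre-processing step, before the data lands in a warehouse table.
totals = defaultdict(float)
for row in csv.DictReader(raw):
    totals[row["user"]] += float(row["amount"])

print(dict(totals))  # {'alice': 17.0, 'bob': 5.0}
```

A PySpark equivalent would express the same step as `df.groupBy("user").sum("amount")`, distributed across a cluster instead of a single loop.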

 

Good to have (to filter or prioritize candidates):

  • Experience with testing libraries such as pytest for writing unit tests for the developed code.
  • Knowledge of Machine Learning algorithms and libraries would be good to have; implementation experience would be an added advantage.
  • Knowledge and experience of data lakes, Docker, and Kubernetes would be good to have.
  • Knowledge of Azure Functions, Elasticsearch, etc. will be good to have.
  • Experience with model versioning (MLflow) and data versioning will be beneficial.
  • Experience with microservices libraries or with Python libraries such as Flask for hosting ML services and models would be great.
Job posted by
Siddarth Thakur

Data Engineer

at Mobile Programming LLC

Founded 1998  •  Services  •  100-1000 employees  •  Profitable
Data Warehouse (DWH)
Big Data
Spark
Apache Kafka
Data engineering
Python
SQL
Linux/Unix
Shell Scripting
DevOps
CI/CD
Docker
Kubernetes
Java
Scala
Data integration
Google Cloud Platform (GCP)
Kafka
Pentaho
Mohali, Gurugram, Bengaluru (Bangalore), Chennai, Hyderabad, Pune
3 - 8 yrs
₹3L - ₹9L / yr
Day-to-day Activities
Develop complex queries, pipelines and software programs to solve analytics and data mining problems
Interact with other data scientists, product managers, and engineers to understand business problems, technical requirements to deliver predictive and smart data solutions
Prototype new applications or data systems
Lead data investigations to troubleshoot data issues that arise along the data pipelines
Collaborate with different product owners to incorporate data science solutions
Maintain and improve data science platform
Must Have
BS/MS/PhD in Computer Science, Electrical Engineering or related disciplines
Strong fundamentals: data structures, algorithms, databases
5+ years of software industry experience with 2+ years in analytics, data mining, and/or data warehouse
Fluency with Python
Experience developing web services using REST approaches.
Proficiency with SQL/Unix/Shell
Experience in DevOps (CI/CD, Docker, Kubernetes)
Self-driven, challenge-loving, detail oriented, teamwork spirit, excellent communication skills, ability to multi-task and manage expectations
Preferred
Industry experience with big data processing technologies such as Spark and Kafka
Experience with machine learning algorithms and/or R a plus 
Experience in Java/Scala a plus
Experience with any MPP analytics engines like Vertica
Experience with data integration tools like Pentaho/SAP Analytics Cloud
Job posted by
Apurva kalsotra

Data Engineer

at Service based company

pandas
PySpark
Big Data
Data engineering
Performance optimization
OO concepts
SQL
Python
Remote only
3 - 8 yrs
₹8L - ₹13L / yr
Data pre-processing, data transformation, data analysis, and feature engineering.
The candidate must have expertise in ADF (Azure Data Factory) and be well versed with Python.
Performance optimization of scripts (code) and productionizing of code (SQL, Pandas, Python or PySpark, etc.)
Required skills:
Bachelor's in Computer Science, Data Science, Computer Engineering, IT, or equivalent
Fluency in Python (Pandas), PySpark, SQL, or similar
Azure Data Factory experience (min 12 months)
Able to write efficient code using traditional and OO concepts and modular programming, following the SDLC process.
Experience in production optimization and end-to-end performance tracing (technical root cause analysis)
Ability to work independently, with demonstrated experience in project or program management
Azure experience; ability to translate data scientist code in Python and make it efficient (production-ready) for cloud deployment
Job posted by
Sonali Kamani

Big Data/Java Programming

at Dailyhunt

Founded 2007  •  Product  •  500-1000 employees  •  Raised funding
Java
Big Data
Hadoop
Pig
Apache Hive
MapReduce
Elastic Search
MongoDB
Analytics
Scalability
Leadership
Software engineering
Data Analytics
Data domain
Programming
Apache Hadoop
Apache Pig
Communication Skills
Bengaluru (Bangalore)
3 - 9 yrs
₹3L - ₹9L / yr
What You'll Do:

- Develop analytic tools, working on Big Data and distributed environments; scalability will be the key
- Provide architectural and technical leadership on developing our core analytic platform
- Lead development efforts on product features in Java
- Help scale our mobile platform as we experience massive growth

What we Need:

- Passion to build an analytics & personalisation platform at scale
- 3 to 9 years of software engineering experience with a product-based company in the data analytics/big data domain
- Passion for designing and developing from scratch
- Expert-level Java programming and experience leading the full lifecycle of application development
- Experience in Analytics, Hadoop, Pig, Hive, MapReduce, Elasticsearch, MongoDB is an additional advantage
- Strong communication skills, verbal and written
Job posted by
khushboo jain

Predictive Modelling And Optimization Consultant (SCM)

at BRIDGEi2i Analytics Solutions

Founded 2011  •  Products & Services  •  100-1000 employees  •  Profitable
R Programming
Data Analytics
Predictive modelling
Supply Chain Management (SCM)
SQL
MySQL
Python
Statistical Modeling
Supply chain optimization
Bengaluru (Bangalore)
4 - 10 yrs
₹9L - ₹15L / yr

The person holding this position is responsible for leading the solution development and implementing advanced analytical approaches across a variety of industries in the supply chain domain.

In this position, you act as an interface between the delivery team and the supply chain team, effectively understanding the client's business and supply chain.

Candidates will be expected to lead projects across several areas such as

  • Demand forecasting
  • Inventory management
  • Simulation & Mathematical optimization models.
  • Procurement analytics
  • Distribution/Logistics planning
  • Network planning and optimization

 

Qualification and Experience

  • 4+ years of analytics experience in supply chain, preferably in industries such as hi-tech, consumer technology, CPG, automobile, retail, or e-commerce supply chain.
  • Master's in Statistics/Economics, an MBA, or an M.Sc./M.Tech with Operations Research/Industrial Engineering/Supply Chain
  • Hands-on experience in delivery of projects using statistical modelling

Skills / Knowledge

  • Hands-on experience with statistical modelling software such as R/Python and SQL.
  • Experience in advanced analytics / statistical techniques (regression, decision trees, ensemble machine learning algorithms, etc.) will be considered an added advantage.
  • Highly proficient with Excel, PowerPoint, and Word applications.
  • APICS-CSCP or PMP certification will be an added advantage.
  • Strong knowledge of supply chain management.
  • Working knowledge of linear/nonlinear optimization.
  • Ability to structure problems through a data-driven decision-making process.
  • Excellent project management skills, including time and risk management and project structuring.
  • Ability to identify and draw on leading-edge analytical tools and techniques to develop creative approaches and new insights into business issues through data analysis.
  • Ability to liaise effectively with multiple stakeholders and functional disciplines.
  • Experience in optimization tools like CPLEX, ILOG, or GAMS will be an added advantage.
Job posted by
Venniza Glades

Freelance Faculty

at Simplilearn Solutions

Founded 2009  •  Product  •  500-1000 employees  •  Profitable
Java
Amazon Web Services (AWS)
Big Data
Corporate Training
Data Science
Digital Marketing
Hadoop
Anywhere, United States, Canada
3 - 10 yrs
₹2L - ₹10L / yr
To introduce myself, I head Global Faculty Acquisition for Simplilearn.

About My Company: SIMPLILEARN has transformed 500,000+ careers across 150+ countries with 400+ courses, and yes, we are a Registered Professional Education Provider offering PMI-PMP, PRINCE2, ITIL (Foundation, Intermediate & Expert), MSP, COBIT, Six Sigma (GB, BB & Lean Management), Financial Modeling with MS Excel, CSM, PMI-ACP, RMP, CISSP, CTFL, CISA, CFA Level 1, CCNA, CCNP, Big Data Hadoop, CBAP, iOS, TOGAF, Tableau, Digital Marketing, Data Scientist with Python, Data Science with SAS & Excel, Big Data Hadoop Developer & Administrator, Apache Spark and Scala, Tableau Desktop 9, Agile Scrum Master, Salesforce Platform Developer, Azure & Google Cloud. Our official website: www.simplilearn.com

If you're interested in teaching, interacting, sharing real-life experiences, and have a passion to transform careers, please join hands with us.

Onboarding Process
  • Updated CV needs to be sent to my email id, with relevant certificate copies.
  • Sample e-learning access will be shared with a 15-day trial post your registration on our website.
  • My Subject Matter Expert will evaluate you on your areas of expertise over a telephonic conversation (duration 15 to 20 minutes).
  • Commercial discussion.
  • We will register you to our ongoing online session to introduce you to our course content and the Simplilearn style of teaching.
  • A demo will be conducted to check your training style and Internet connectivity.
  • Freelancer Master Service Agreement.

Payment Process
  • Once a workshop (or the last day of training for the batch) is completed, you have to share your invoice.
  • An automated tracking ID will be shared from our automated ticketing system.
  • Our faculty group will verify the details provided and share the invoice with our internal finance team to process your payment; if any additional information is required, we will coordinate with you.
  • Payment will be processed in 15 working days as per the policy; the 15 days count from the date the invoice is received.

Please share your updated CV to proceed to the next step of the onboarding process.
Job posted by
STEVEN JOHN