Data Engineer

at 1CH

Posted by Sathish Sukumar
Chennai, Bengaluru (Bangalore), Hyderabad, NCR (Delhi | Gurgaon | Noida), Mumbai, Pune
4 - 15 yrs
₹10L - ₹25L / yr
Full time
Skills
Data engineering
Data engineer
ETL
SSIS
ADF
azure data factory
Azure
SQL
  • Expertise in designing and implementing enterprise scale database (OLTP) and Data warehouse solutions.
  • Hands-on experience implementing Azure SQL Database, Azure SQL Data Warehouse (Azure Synapse Analytics), and big data processing using Azure Databricks and Azure HDInsight.
  • Expert in writing T-SQL programming for complex stored procedures, functions, views and query optimization.
  • Should be aware of database development for both on-premises and SaaS applications using SQL Server and PostgreSQL.
  • Experience in ETL and ELT implementations using Azure Data Factory V2 and SSIS.
  • Experience and expertise in building machine learning models using logistic and linear regression, decision tree, and random forest algorithms.
  • PolyBase queries for exporting and importing data into Azure Data Lake.
  • Building data models both tabular and multidimensional using SQL Server data tools.
  • Writing data preparation, cleaning, and processing steps using Python, Scala, and R.
  • Programming experience using python libraries NumPy, Pandas and Matplotlib.
  • Implementing NoSQL databases and writing queries using Cypher.
  • Designing end user visualizations using Power BI, QlikView and Tableau.
  • Experience working with all versions of SQL Server 2005/2008/2008R2/2012/2014/2016/2017/2019
  • Experience using the expression languages MDX and DAX.
  • Experience in migrating on-premises SQL Server databases to Microsoft Azure.
  • Hands on experience in using Azure blob storage, Azure Data Lake Storage Gen1 and Azure Data Lake Storage Gen2.
  • Performance tuning complex SQL queries, hands on experience using SQL Extended events.
  • Data modeling using Power BI for ad hoc reporting.
  • Raw data load automation using T-SQL and SSIS
  • Expert in migrating existing on-premises databases to SQL Azure.
  • Experience in using U-SQL for Azure Data Lake Analytics.
  • Hands on experience in generating SSRS reports using MDX.
  • Experience in designing predictive models using Python and SQL Server.
  • Developing machine learning models using Azure Databricks and SQL Server
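The listing above asks for experience building machine learning models with logistic regression. As an illustration only (not part of the role description), here is a minimal single-feature logistic regression fitted with plain gradient descent; the toy data, learning rate, and function names are invented for this sketch:

```python
import math

def train_logistic(xs, ys, lr=0.5, epochs=2000):
    """Fit a one-feature logistic regression with batch gradient descent."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # sigmoid probability
            gw += (p - y) * x                          # gradient w.r.t. weight
            gb += p - y                                # gradient w.r.t. bias
        w -= lr * gw / n
        b -= lr * gb / n
    return w, b

def predict(w, b, x):
    """Classify as 1 when the predicted probability reaches 0.5."""
    return 1 if 1.0 / (1.0 + math.exp(-(w * x + b))) >= 0.5 else 0

# Toy, linearly separable data: label is 1 when the feature exceeds ~3.
xs = [1.0, 2.0, 2.5, 3.5, 4.0, 5.0]
ys = [0, 0, 0, 1, 1, 1]
w, b = train_logistic(xs, ys)
```

In practice this would be a call to scikit-learn or Spark MLlib rather than hand-written descent; the sketch only shows the mechanics the bullet refers to.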

About 1CH

Founded
2018
Type
Services
Size
20-100 employees
Stage
Profitable

Similar jobs

Data Engineering Head

at A FinTech NBFC dedicated to driving Financial inclusion

Agency job
via Jobdost
Data engineering
Spark
Big Data
Data engineer
Hadoop
Spring
Javascript
NodeJS (Node.js)
Amazon Web Services (AWS)
Python
Flask
Java
Express
MongoDB
SQL
NOSQL Databases
DynamoDB
data savviness
Bengaluru (Bangalore)
8 - 12 yrs
₹20L - ₹25L / yr
  • Play a critical role as a member of the leadership team in shaping and supporting our overall company vision, day-to-day operations, and culture.
  • Set the technical vision and build the technical product roadmap from launch to scale; including defining long-term goals and strategies
  • Define best practices around coding methodologies, software development, and quality assurance
  • Define innovative technical requirements and systems while balancing time, feasibility, cost and customer experience
  • Build and support production products
  • Ensure our internal processes and services comply with privacy and security regulations
  • Establish a high performing, inclusive engineering culture focused on innovation, execution, growth and development
  • Set a high bar for our overall engineering practices in support of our mission and goals
  • Develop goals, roadmaps and delivery dates to help us scale quickly and sustainably
  • Collaborate closely with Product, Business, Marketing and Data Science
  • Experience with financial and transactional systems
  • Experience engineering for large volumes of data at scale
  • Experience with financial audit and compliance is a plus
  • Experience building successful consumer-facing web and mobile apps at scale
Job posted by
Mamatha A

Senior Data Scientist

at Top 3 Fintech Startup

Agency job
via Jobdost
Machine Learning (ML)
Data Science
Natural Language Processing (NLP)
Computer Vision
Python
DevOps
SQL
Git
Amazon Web Services (AWS)
PySpark
Postman
Bengaluru (Bangalore)
4 - 7 yrs
₹11L - ₹17L / yr
Responsible for leading a team of analysts to build and deploy predictive models that infuse core business functions with deep analytical insights. The Senior Data Scientist will also work closely with the Kinara management team to investigate strategically important business questions.

Lead a team through the entire analytical and machine learning model life cycle:

  • Define the problem statement
  • Build and clean datasets
  • Exploratory data analysis
  • Feature engineering
  • Apply ML algorithms and assess the performance
  • Code for deployment
  • Code testing and troubleshooting
  • Communicate analysis to stakeholders
  • Manage Data Analysts and Data Scientists
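The lifecycle steps above can be sketched end to end in a few lines. This is an illustrative toy only — the dataset, the threshold task, and the nearest-centroid "model" are stand-ins chosen for brevity, not the team's actual stack:

```python
from statistics import mean

# 1-2. Problem statement + dataset: classify a reading as high (1) or low (0).
data = [(0.9, 0), (1.1, 0), (1.3, 0), (2.9, 1), (3.1, 1), (3.3, 1),
        (1.0, 0), (3.0, 1)]

# 3-4. Exploration / feature engineering kept trivial: one numeric feature,
# split into a training set and a held-out test set.
train, test = data[:6], data[6:]

# 5. Apply an ML algorithm: a nearest-centroid classifier.
c0 = mean(x for x, y in train if y == 0)   # centroid of class 0
c1 = mean(x for x, y in train if y == 1)   # centroid of class 1
predict = lambda x: 0 if abs(x - c0) <= abs(x - c1) else 1

# 6-7. Assess performance on the held-out split.
accuracy = sum(predict(x) == y for x, y in test) / len(test)
```

A real pipeline would add deployment packaging, tests, and stakeholder reporting around the same skeleton.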
Job posted by
Shalaka ZawarRathi

Senior Technical Consultant

at ketteq

Founded 2018  •  Products & Services  •  20-100 employees  •  Bootstrapped
ETL
SQL
PostgreSQL
Remote only
5 - 15 yrs
₹20L - ₹35L / yr

ketteQ is a supply chain planning and automation platform. We are looking for an extremely strong and experienced Technical Consultant to help with system design, data engineering, and software configuration and testing during the implementation of supply chain planning solutions. This job comes with a very attractive compensation package and a work-from-home benefit. If you are a high-energy, motivated, and initiative-taking individual, then this could be a fantastic opportunity for you.

 

Responsible for the technical design and implementation of supply chain planning solutions.
Responsibilities

  • Design and document system architecture
  • Design data mappings
  • Develop integrations
  • Test and validate data
  • Develop customizations
  • Deploy solution
  • Support demo development activities

Requirements

  • Minimum 5 years' experience in the technical implementation of enterprise software, preferably supply chain planning software
  • Proficiency in ANSI SQL/PostgreSQL
  • Proficiency in ETL tools such as Pentaho, Talend, Informatica, and MuleSoft
  • Experience with Webservices and REST APIs
  • Knowledge of AWS
  • Salesforce and Tableau experience a plus
  • Excellent analytical skills
  • Must possess excellent verbal and written communication skills and be able to communicate effectively with international clients
  • Must be a self-starter and highly motivated individual who is looking to make a career in supply chain management
  • Quick thinker with proven decision-making and organizational skills
  • Must be flexible to work non-standard hours to accommodate globally dispersed teams and clients

Education

  • Bachelor's in Engineering from a top-ranked university with above-average grades
Job posted by
Nikhil Jain

Data Engineer

at AI-powered cloud-based SaaS solution

Agency job
via wrackle
Data engineering
Big Data
Data Engineer
Big Data Engineer
Hibernate (Java)
Data Structures
Agile/Scrum
SaaS
Cassandra
Spark
Python
NOSQL Databases
Hadoop
HDFS
MapReduce
AWS CloudFormation
EMR
Amazon S3
Apache Kafka
Apache ZooKeeper
Systems Development Life Cycle (SDLC)
Java
YARN
Bengaluru (Bangalore)
2 - 10 yrs
₹15L - ₹50L / yr
Responsibilities

● Able to contribute to gathering functional requirements, developing technical specifications, and project & test planning
● Demonstrating technical expertise, and solving challenging programming and design
problems
● Roughly 80% hands-on coding
● Generate technical documentation and PowerPoint presentations to communicate
architectural and design options, and educate development teams and business users
● Resolve defects/bugs during QA testing, pre-production, production, and post-release
patches
● Work cross-functionally with various Bidgely teams including product management, QA/QE, various product lines, and/or business units to drive forward results

Requirements
● BS/MS in computer science or equivalent work experience
● 2-4 years’ experience designing and developing applications in Data Engineering
● Hands-on experience with Big Data ecosystems: Hadoop, HDFS, MapReduce, YARN, AWS Cloud, EMR, S3, Spark, Cassandra, Kafka, ZooKeeper
● Expertise with any of the following object-oriented languages (OOD): Java/J2EE, Scala, Python
● Strong leadership experience: Leading meetings, presenting if required
● Excellent communication skills: Demonstrated ability to explain complex technical
issues to both technical and non-technical audiences
● Expertise in the Software design/architecture process
● Expertise with unit testing & Test-Driven Development (TDD)
● Experience on Cloud or AWS is preferable
● Have a good understanding of, and the ability to develop, software, prototypes, or proofs of concept (POCs) for various Data Engineering requirements.
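Several of the requirements above center on the MapReduce model behind Hadoop. As a hedged illustration — pure Python standing in for a real cluster framework — the map, shuffle, and reduce phases of the canonical word count look like this:

```python
from collections import defaultdict
from itertools import chain

def map_phase(doc):
    # Mapper: emit a (word, 1) pair for every word in one input split.
    return [(word.lower(), 1) for word in doc.split()]

def shuffle(pairs):
    # Shuffle: group values by key, as the framework does between phases.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reducer: sum the counts accumulated for each key.
    return {key: sum(values) for key, values in groups.items()}

docs = ["big data big pipelines", "data pipelines at scale"]
counts = reduce_phase(shuffle(chain.from_iterable(map_phase(d) for d in docs)))
```

On a real cluster the mappers and reducers run in parallel across HDFS blocks; the sketch only shows the data flow the interview questions usually probe.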
Job posted by
Naveen Taalanki

Data Science Core Developer

at Kwalee

Founded 2011  •  Product  •  100-500 employees  •  Profitable
Data Analytics
Data Science
Python
NOSQL Databases
SQL
Bengaluru (Bangalore)
3 - 15 yrs
Best in industry

Kwalee is one of the world’s leading multiplatform game publishers and developers, with well over 750 million downloads worldwide for mobile hits such as Draw It, Teacher Simulator, Let’s Be Cops 3D, Traffic Cop 3D and Makeover Studio 3D. Alongside this, we also have a growing PC and Console team of incredible pedigree that is on the hunt for great new titles to join TENS!, Eternal Hope and Die by the Blade. 

With a team of talented people collaborating daily between our studios in Leamington Spa, Bangalore and Beijing, or on a remote basis from Turkey, Brazil, the Philippines and many more places, we have a truly global team making games for a global audience. And it’s paying off: Kwalee games have been downloaded in every country on earth! If you think you’re a good fit for one of our remote vacancies, we want to hear from you wherever you are based.

Founded in 2011 by David Darling CBE, a key architect of the UK games industry who previously co-founded and led Codemasters for many years, our team also includes legends such as Andrew Graham (creator of Micro Machines series) and Jason Falcus (programmer of classics including NBA Jam) alongside a growing and diverse team of global gaming experts. Everyone contributes creatively to Kwalee’s success, with all employees eligible to pitch their own game ideas on Creative Wednesdays, and we’re proud to have built our success on this inclusive principle. Could your idea be the next global hit?

What’s the job?

As a Data Science Core Developer you will build tools and develop technology that deliver data science products to a team of strategists, marketing experts and game developers.


What you will be doing

  • Create analytical tools, from simple scripts to full stack applications.
  • Develop successful prototype tools into highly tested automated programs
  • Work with the marketing, publishing and development teams to understand the problems they are facing, how to solve them and deliver products that are understandable to non-data scientists
  • Solve challenging data management and data flow problems to fuel Kwalee’s analysis


How you will be doing this

  • You’ll be part of an agile, multidisciplinary and creative team and work closely with them to ensure the best results.
  • You'll think creatively, be motivated by challenges, and constantly strive for the best.
  • You'll work with cutting-edge technology; if you need software or hardware to get the job done efficiently, you will get it. We even have a robot!


Team

Our talented team is our signature. We have a highly creative atmosphere with more than 200 staff where you’ll have the opportunity to contribute daily to important decisions. You’ll work within an extremely experienced, passionate and diverse team, including David Darling and the creator of the Micro Machines video games.


Skills and Requirements

  • A proven track record of writing high quality program code in Python
  • Experience with Python machine learning frameworks and libraries such as TensorFlow and scikit-learn
  • The ability to write quick scripts to accelerate manual tasks
  • Knowledge of NoSQL and SQL databases like Couchbase, Elasticsearch and PostgreSQL  will be helpful but not necessary
  • An avid interest in the development, marketing and monetisation of mobile games


We offer

  • We want everyone involved in our games to share our success, that’s why we have a generous team profit sharing scheme from day 1 of employment
  • In addition to a competitive salary we also offer private medical cover and life assurance
  • Creative Wednesdays! (Design and make your own games every Wednesday)
  • 20 days of paid holidays plus bank holidays 
  • Hybrid model available depending on the department and the role
  • Relocation support available 
  • Great work-life balance with flexible working hours
  • Quarterly team building days - work hard, play hard!
  • Monthly employee awards
  • Free snacks, fruit and drinks


Our philosophy

We firmly believe in creativity and innovation and that a fundamental requirement for a successful and happy company is having the right mix of individuals. With the right people in the right environment anything and everything is possible.

Kwalee makes games to bring people, their stories, and their interests together. As an employer, we’re dedicated to making sure that everyone can thrive within our team by welcoming and supporting people of all ages, races, colours, beliefs, sexual orientations, genders and circumstances. With the inclusion of diverse voices in our teams, we bring plenty to the table that’s fresh, fun and exciting; it makes for a better environment and helps us to create better games for everyone! This is how we move forward as a company – because these voices are the difference that make all the difference.

Job posted by
Michael Hoppitt

Data Engineer -Lead

at Vahak

Founded 2016  •  Product  •  20-100 employees  •  Raised funding
SQL
Data engineering
Big Data
Python
R Language
Meta-data management
Data Analytics
Tableau
PowerBI
Bengaluru (Bangalore)
4 - 12 yrs
₹15L - ₹40L / yr

Who Are We?

 

Vahak (https://www.vahak.in) is India’s largest & most trusted online transport marketplace & directory for road transport businesses and individual commercial vehicle (Trucks, Trailers, Containers, Hyva, LCVs) owners for online truck and load booking, transport business branding and transport business network expansion. Lorry owners can find intercity and intracity loads from all over India and connect with other businesses to find trusted transporters and best deals in the Indian logistics services market. With the Vahak app, users can book loads and lorries from a live transport marketplace with over 7 Lakh + Transporters and Lorry owners in over 10,000+ locations for daily transport requirements.

Vahak has raised a capital of $5+ Million in a Pre-Series A round from RTP Global along with participation from Luxor Capital and Leo Capital. The other marquee angel investors include Kunal Shah, Founder and CEO, CRED; Jitendra Gupta, Founder and CEO, Jupiter; Vidit Aatrey and Sanjeev Barnwal, Co-founders, Meesho; Mohd Farid, Co-founder, Sharechat; Amrish Rau, CEO, Pine Labs; Harsimarbir Singh, Co-founder, Pristyn Care; Rohit and Kunal Bahl, Co-founders, Snapdeal; and Ravish Naresh, Co-founder and CEO, Khatabook.

 

Lead Data Engineer:

We at Vahak are looking for an enthusiastic and passionate Data Engineering lead to join our young & diverse team. You will play a key role in the data science group, working with state-of-the-art big data technologies, building pipelines for various data sources, and developing the organization's data lake and data warehouse.

Our goal as a group is to drive powerful big data analytics products with scalable results. We love people who are humble and collaborative, with a hunger for excellence.

Responsibilities

  • Should act as a technical resource for the Data Science team and be involved in creating and implementing current and future Analytics projects like data lake design, data warehouse design, etc.
  • Analysis and design of ETL solutions to store/fetch data from multiple systems like Google Analytics, CleverTap, CRM systems etc.
  • Developing and maintaining data pipelines for real time analytics as well as batch analytics use cases.
  • Collaborate with data scientists and actively work in the feature engineering and data preparation phase of model building
  • Collaborate with product development and DevOps teams in implementing the data collection and aggregation solutions
  • Ensure quality and consistency of the data in Data warehouse and follow best data governance practices
  • Analyze large amounts of information to discover trends and patterns
  • Mine and analyze data from company databases to drive optimization and improvement of product development, marketing techniques and business strategies.

Requirements:

  • Bachelor's or Master's in a highly numerate discipline such as Engineering, Science, or Economics
  • 5+ years of proven experience working as a Data Engineer, preferably in an e-commerce, web-based, or consumer technology company
  • Hands-on experience working with different big data tools like Hadoop, Spark, Flink, Kafka, and so on
  • Good understanding of the AWS ecosystem for big data analytics
  • Hands-on experience in creating data pipelines, either using tools or by independently writing scripts
  • Hands-on experience in scripting languages like Python, Scala, Unix shell scripting, and so on
  • Strong problem-solving skills with an emphasis on product development
  • Experience using business intelligence tools, e.g. Tableau, Power BI, would be an added advantage (not mandatory)

 

Job posted by
Vahak Talent

Sr. Software Engineer

at Cliffai

Founded 2017  •  Product  •  20-100 employees  •  Profitable
ETL
Informatica
Data Warehouse (DWH)
Python
SQL
NOSQL Databases
Object Oriented Programming (OOPs)
RabbitMQ
API
Indore
2 - 4 yrs
₹7L - ₹12L / yr


We are looking for

A Senior Software Development Engineer (SDE2) who will be instrumental in the design and development of our backend technology, which manages our exhaustive data pipelines and AI models. Simplifying complexity and building technology that is robust and scalable is your North Star. You'll work closely alongside our CTO and machine learning engineers, frontend and wider technical team to build new capabilities, focused on speed and reliability.

You'll own your work, to build, test and iterate quickly, with direct guidance from our CTO.

Please note: You must have proven industry experience greater than 2 years.

Your work includes

  • Own and manage the whole engineering infrastructure that supports the Greendeck platform.
  • Work to create highly scalable, highly robust, and highly available Python micro-services.
  • Design the architecture to stream data on a huge scale across multiple services.
  • Create and manage data pipelines using tools like Kafka and Celery.
  • Deploy serverless functions to process and manage data.
  • Work with a variety of databases and storage systems to store and strategically manage data.
  • Write connectors to collect data from various third-party services, data storages, and APIs.

 

 

Skills/ Requirements

  • Strong experience in Python, creating scripts, apps, or services
  • Strong automation and scripting skills
  • Knowledge of at least one SQL and one NoSQL database
  • Experience of working with messaging systems like Kafka, RabbitMQ
  • Good knowledge of data frames and data manipulation
  • Have used and deployed apps using FastAPI or Flask or similar tech
  • Knowledge of CI/CD paradigm
  • Basic knowledge about Docker
  • Have knowledge of creating and using REST APIs
  • Good knowledge of OOP Fundamentals.
  • (Optional) Knowledge about Celery/ Airflow
  • (Optional) Knowledge about Lambda/ Serverless
  • (Optional) Have connected apps using OAuth
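The skills above include messaging systems such as Kafka and RabbitMQ. A minimal in-process sketch of the producer/consumer pattern those brokers implement — using only the standard library as a stand-in for a real broker, with invented message fields:

```python
import queue
import threading

# An in-process stand-in for a broker such as Kafka or RabbitMQ.
broker = queue.Queue()
results = []

def producer(n):
    # Publish n messages, then a sentinel meaning "no more messages".
    for i in range(n):
        broker.put({"event_id": i, "payload": f"price-check-{i}"})
    broker.put(None)

def consumer():
    # Pull messages off the queue until the sentinel arrives.
    while True:
        msg = broker.get()
        if msg is None:
            break
        results.append(msg["event_id"])  # "process" the message

t_prod = threading.Thread(target=producer, args=(5,))
t_cons = threading.Thread(target=consumer)
t_prod.start()
t_cons.start()
t_prod.join()
t_cons.join()
```

Real brokers add persistence, partitioning, and acknowledgements on top of this same hand-off; the sketch only shows the decoupling between producer and consumer threads.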


What you can expect

  • Attractive pay, bonus scheme and flexible vacation policy.
  • A truly flexible, trust-based, performance driven work culture.
  • Lunch is on us, everyday!
  • A young and passionate team building elegant products with intricate technology for the future of businesses around the world. Our average age is 25!
  • The chance to make a huge difference to the success of a world-class SaaS product and the opportunity to make an impact.


It's important to us

  • That you relocate to Indore
  • That you have a minimum of 2 years of experience working as a Software Developer



Job posted by
Vranda Baheti

Power BI Developer

at Gulf client

Agency job
via Fragma Data Systems
PowerBI
Data Warehouse (DWH)
SQL
DAX
Power query
Remote, Bengaluru (Bangalore)
5 - 9 yrs
₹10L - ₹20L / yr
Key Skills:
  • Strong knowledge of Power BI (DAX + Power Query + Power BI Service + Power BI Desktop visualisations) and Azure data storage.
  • Should have experience with Power BI mobile dashboards.
  • Strong knowledge of SQL.
  • Good knowledge of DWH concepts.
  • Work as an independent contributor at the client location.
  • Implementing access control and imposing required security.
  • Candidate must have very good communication skills.
Job posted by
Priyanka U

Data Engineer

at INSOFE

Founded 2011  •  Services  •  100-1000 employees  •  Profitable
Big Data
Data engineering
Apache Hive
Apache Spark
Hadoop
Amazon Web Services (AWS)
Java
SQL
Python
Hyderabad, Bengaluru (Bangalore)
7 - 10 yrs
₹12L - ₹18L / yr
Roles & Responsibilities:
  • Total experience of 7-10 years; should be interested in teaching and research
  • 3+ years' experience in data engineering, which includes data ingestion, preparation, provisioning, automated testing, and quality checks.
  • 3+ years of hands-on experience with Big Data cloud platforms like AWS and GCP, data lakes, and data warehouses
  • 3+ years of Big Data and analytics technologies. Experience in SQL and in writing code for the Spark engine using Python, Scala, or Java. Experience in Spark and Scala
  • Experience in designing, building, and maintaining ETL systems
  • Experience in data pipeline and workflow management tools like Airflow
  • Application development background, along with knowledge of analytics libraries, open-source natural language processing, and statistical and big data computing libraries
  • Familiarity with visualization and reporting tools like Tableau and Kibana.
  • Should be good at storytelling in technology
Please note that candidates should be interested in teaching and research work.

Qualification: B.Tech / BE / M.Sc / MBA / B.Sc; certifications in Big Data technologies and cloud platforms like AWS, Azure, and GCP will be preferred
Primary Skills: Big Data + Python + Spark + Hive + Cloud Computing
Secondary Skills: NoSQL+ SQL + ETL + Scala + Tableau
Selection Process: 1 Hackathon, 1 Technical round and 1 HR round
Benefit: Free of cost training on Data Science from top notch professors
Job posted by
Nitika Bist

Big Data Developer

at cemtics

Founded 2015  •  Services  •  100-1000 employees  •  Profitable
Big Data
Spark
Hadoop
SQL
Python
Relational Database (RDBMS)
Remote, NCR (Delhi | Gurgaon | Noida)
4 - 6 yrs
₹5L - ₹12L / yr

JD:

Required Skills:

  • Intermediate to expert-level hands-on programming in one of the following languages: Java, Python, PySpark, or Scala.
  • Strong practical knowledge of SQL.
  • Hands-on experience with Spark/Spark SQL
  • Data structures and algorithms
  • Hands-on experience as an individual contributor in the design, development, testing, and deployment of applications based on Big Data technologies
  • Experience in Big Data application tools, such as Hadoop, MapReduce, Spark, etc.
  • Experience with NoSQL databases like HBase
  • Experience with the Linux OS environment (shell script, AWK, SED)
  • Intermediate RDBMS skills; able to write SQL queries with complex relations on top of a large RDBMS (100+ tables)
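The last requirement concerns writing SQL queries over complex relations. A small self-contained sketch — an in-memory SQLite database with invented tables, standing in for a large production RDBMS — of a join with aggregation:

```python
import sqlite3

# In-memory schema standing in for a large RDBMS: orders joined to customers.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (
        id INTEGER PRIMARY KEY,
        customer_id INTEGER REFERENCES customers(id),
        amount REAL
    );
    INSERT INTO customers VALUES (1, 'Asha'), (2, 'Ravi');
    INSERT INTO orders VALUES (10, 1, 250.0), (11, 1, 100.0), (12, 2, 75.0);
""")

# A join with aggregation: total order value per customer, largest first.
rows = con.execute("""
    SELECT c.name, SUM(o.amount) AS total
    FROM customers c
    JOIN orders o ON o.customer_id = c.id
    GROUP BY c.id
    ORDER BY total DESC
""").fetchall()
```

A 100+ table schema changes the scale, not the shape: the same join/group/order skeleton is what "complex relations" queries are built from.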
Job posted by
Tapan Sahani