
11+ ITL Jobs in India

Apply to 11+ ITL Jobs on CutShort.io. Find your next job, effortlessly. Browse ITL Jobs and apply today!

Encubate Tech Private Ltd
Mumbai
5 - 6 yrs
₹15L - ₹20L / yr
Amazon Web Services (AWS)
Amazon Redshift
Data modeling
ITL
Agile/Scrum

Roles and Responsibilities

We are seeking an AWS Cloud Engineer / Data Warehouse Developer for our Data CoE team to help us configure and develop new AWS environments for our Enterprise Data Lake and migrate traditional on-premise workloads to the cloud. The candidate must have a sound understanding of BI best practices, relational structures, dimensional data modelling, structured query language (SQL), and data warehouse and reporting techniques.

• Extensive experience in providing AWS Cloud solutions to various business use cases.
• Creating star-schema data models, performing ETL, and validating results with business representatives.
• Supporting implemented BI solutions by monitoring and tuning queries and data loads, addressing user questions concerning data integrity, monitoring performance, and communicating functional and technical issues.
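The star-schema modelling and validation responsibilities above can be illustrated with a minimal, self-contained sketch. SQLite stands in for Redshift here, and every table and column name is invented for the example:

```python
import sqlite3

# Star schema in miniature: one dimension table, one fact table.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, name TEXT)")
cur.execute("""CREATE TABLE fact_sales (
    product_key INTEGER REFERENCES dim_product(product_key),
    qty INTEGER,
    amount REAL)""")

# ETL order matters: load dimensions first, then the facts that reference them.
cur.executemany("INSERT INTO dim_product VALUES (?, ?)",
                [(1, "widget"), (2, "gadget")])
cur.executemany("INSERT INTO fact_sales VALUES (?, ?, ?)",
                [(1, 3, 30.0), (2, 1, 25.0), (1, 2, 20.0)])

# A validation query of the kind a business representative would sign off on:
cur.execute("""SELECT d.name, SUM(f.amount) AS revenue
               FROM fact_sales AS f
               JOIN dim_product AS d USING (product_key)
               GROUP BY d.name
               ORDER BY d.name""")
rows = cur.fetchall()
print(rows)  # [('gadget', 25.0), ('widget', 50.0)]
```

On Redshift the same shape would use distribution and sort keys on the fact table, but the dimension-then-fact load order and the reconciliation query carry over unchanged.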

Job Description:

This position is responsible for the successful delivery of business intelligence information to the entire organization and requires experience in BI development and implementations, data architecture, and data warehousing.

Requisite Qualification

Essential: AWS Certified Database – Specialty or AWS Certified Data Analytics – Specialty
Preferred: Any other data engineering certification

Requisite Experience

Essential: 4-7 yrs of experience
Preferred: 2+ yrs of experience in ETL & data pipelines

Skills Required

• AWS: S3, DMS, Redshift, EC2, VPC, Lambda, Delta Lake, CloudWatch, etc.
• Big Data: Databricks, Spark, Glue, and Athena
• Expertise in Lake Formation, Python programming, Spark, and shell scripting
• Minimum Bachelor’s degree with 5+ years of experience in designing, building, and maintaining AWS data components
• 3+ years of experience in data component configuration and related role and access setup
• Expertise in Python programming
• Knowledge of all aspects of DevOps (source control, continuous integration, deployments, etc.)
• Comfortable working with DevOps tooling: Jenkins, Bitbucket, CI/CD
• Hands-on ETL development experience, preferably using SSIS
• SQL Server experience required
• Strong analytical skills to solve and model complex business requirements
• Sound understanding of BI best practices/methodologies, relational structures, dimensional data modelling, structured query language (SQL), and data warehouse and reporting techniques

Preferred Skills

• Experience working in a Scrum environment.
• Experience in administration (Windows/Unix/Network/Database/Hadoop) is a plus.
• Experience in SQL Server, SSIS, SSAS, SSRS
• Comfortable creating data models and visualizations using Power BI
• Hands-on experience in relational and multi-dimensional data modelling, including multiple source systems from databases and flat files, and the use of standard data modelling tools
• Ability to collaborate on a team with infrastructure, BI report development, and business analyst resources, and clearly communicate solutions to both technical and non-technical team members

Leading Sales Enabler
Agency job
via Qrata by Blessy Fernandes
Bengaluru (Bangalore)
5 - 10 yrs
₹25L - ₹40L / yr
ETL
Spark
Python
Amazon Redshift
• 5+ years of experience in a Data Engineer role.
• Proficiency in Linux.
• Must have SQL knowledge and experience working with relational databases and query authoring (SQL), as well as familiarity with databases including MySQL, Mongo, Cassandra, and Athena.
• Must have experience with Python/Scala.
• Must have experience with Big Data technologies like Apache Spark.
• Must have experience with Apache Airflow.
• Experience with data pipeline and ETL tools like AWS Glue.
• Experience working with AWS cloud services: EC2, S3, RDS, Redshift.
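The Glue- and Airflow-style pipeline experience asked for above comes down to the extract-transform-load pattern. A minimal plain-Python sketch of that pattern follows; the data, function names, and "warehouse" dict are invented stand-ins, not a real Glue or Airflow API:

```python
def extract():
    # Stand-in for reading raw events from MySQL, S3, or Athena.
    return [{"user": "a", "ms": 120},
            {"user": "b", "ms": 340},
            {"user": "a", "ms": 90}]

def transform(rows):
    # Aggregate total latency (in ms) per user.
    totals = {}
    for row in rows:
        totals[row["user"]] = totals.get(row["user"], 0) + row["ms"]
    return totals

def load(aggregates, target):
    # Stand-in for a Redshift COPY or bulk INSERT.
    target.update(aggregates)

warehouse = {}
load(transform(extract()), warehouse)
print(warehouse)  # {'a': 210, 'b': 340}
```

In practice an orchestrator such as Airflow would run each of these three functions as a separate, retryable task, and Glue or Spark would replace the in-memory transform for large datasets.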
Data Warehouse Architect
Agency job
via The Hub by Sridevi Viswanathan
Mumbai
8 - 10 yrs
₹20L - ₹23L / yr
Data Warehouse (DWH)
ETL
Hadoop
Apache Spark
Spark
• You will work alongside the Project Management to ensure alignment of plans with what is being
delivered.
• You will utilize your configuration management and software release experience; as well as
change management concepts to drive the success of the projects.
• You will partner with senior leaders to understand and communicate the business needs and translate them into IT requirements, and consult with the customer's business analysts on their data warehouse requirements.
• You will assist the technical team in identification and resolution of Data Quality issues.
• You will manage small to medium-sized projects relating to the delivery of applications or
application changes.
• You will use Managed Services or 3rd party resources to meet application support requirements.
• You will interface daily with multi-functional team members within the EDW team and across the
enterprise to resolve issues.
• Recommend and advocate different approaches and designs to the requirements
• Write technical design docs
• Execute data modelling
• Provide solution inputs for the presentation layer
• You will craft and generate summary, statistical, and presentation reports; as well as provide reporting and metrics for strategic initiatives.
• Perform miscellaneous job-related duties as assigned

Preferred Qualifications

• Strong interpersonal, teamwork, organizational and workload planning skills
• Strong analytical, evaluative, and problem-solving abilities as well as exceptional customer service orientation
• Ability to drive clarity of purpose and goals during release and planning activities
• Excellent organizational skills including ability to prioritize tasks efficiently with high level of attention to detail
• Excited by the opportunity to continually improve processes within a large company
• Healthcare or automobile industry background
• Familiarity with major big data solutions and products available in the market.
• Proven ability to drive continuous
Bengaluru (Bangalore)
4 - 7 yrs
₹20L - ₹30L / yr
Spark
Hadoop
Big Data
Data engineering
PySpark
Roles & Responsibilities
What will you do?
  • Deliver plugins for our Python-based ETL pipelines
  • Deliver Python microservices for provisioning and managing cloud infrastructure
  • Implement algorithms to analyse large data sets
  • Draft design documents that translate requirements into code
  • Effectively manage the challenges associated with handling large volumes of data while working to tight deadlines
  • Manage expectations with internal stakeholders and context-switch in a fast-paced environment
  • Thrive in an environment that uses AWS and Elasticsearch extensively
  • Keep abreast of technology and contribute to the engineering strategy
  • Champion best development practices and provide mentorship to others
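The first responsibility above mentions delivering plugins for Python-based ETL pipelines. One common way to structure such a mechanism is a decorator-based registry; the sketch below is hypothetical and not taken from any actual codebase:

```python
# Invented plugin registry: each ETL step registers itself under a name,
# and a pipeline is just an ordered list of step names.
PLUGINS = {}

def register(name):
    def decorator(fn):
        PLUGINS[name] = fn
        return fn
    return decorator

@register("strip_nulls")
def strip_nulls(records):
    # Drop records that failed extraction entirely.
    return [r for r in records if r is not None]

@register("upper_names")
def upper_names(records):
    # Normalize a field without mutating the input records.
    return [{**r, "name": r["name"].upper()} for r in records]

def run_pipeline(records, steps):
    for step in steps:
        records = PLUGINS[step](records)
    return records

data = [{"name": "ada"}, None, {"name": "grace"}]
result = run_pipeline(data, ["strip_nulls", "upper_names"])
print(result)  # [{'name': 'ADA'}, {'name': 'GRACE'}]
```

The appeal of this design is that new plugins can ship independently of the pipeline runner: registering a function is all that is needed to make it addressable by name in a pipeline definition.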
What are we looking for?
  • First and foremost you are a Python developer, experienced with the Python Data stack
  • You love and care about data
  • Your code is an artistic manifest reflecting how elegant you are in what you do
  • You feel sparks of joy when a new abstraction or pattern arises from your code
  • You follow the DRY (Don’t Repeat Yourself) and KISS (Keep It Short and Simple) principles
  • You are a continuous learner
  • You have a natural willingness to automate tasks
  • You have critical thinking and an eye for detail
  • Excellent ability and experience of working to tight deadlines
  • Sharp analytical and problem-solving skills
  • Strong sense of ownership and accountability for your work and delivery
  • Excellent written and oral communication skills
  • Mature collaboration and mentoring abilities
  • We are keen to know your digital footprint (community talks, blog posts, certifications, courses you have participated in or you are keen to, your personal projects as well as any kind of contributions to the open-source communities if any)
Nice to have:
  • Delivering complex software, ideally in a FinTech setting
  • Experience with CI/CD tools such as Jenkins, CircleCI
  • Experience with code versioning (git / mercurial / subversion)
CarWale
5 recruiters
Posted by Vanita Acharya
Navi Mumbai, Mumbai
3 - 5 yrs
₹10L - ₹15L / yr
Data Science
Data Scientist
R Programming
Python
Machine Learning (ML)

About CarWale: CarWale's mission is to bring delight to car buying. We offer a bouquet of reliable tools and services to help car consumers decide on buying the right car, at the right price, and from the right partner. CarWale has always strived to serve car buyers and owners in the most comprehensive and convenient way possible. We provide a platform where car buyers and owners can research, buy, sell, and come together to discuss and talk about their cars. We aim to empower Indian consumers to make informed car buying and ownership decisions with exhaustive and unbiased information on cars through our expert reviews, owner reviews, detailed specifications, and comparisons. We understand that a car is by and large the second-most expensive asset a consumer associates his lifestyle with! Together with CarTrade & BikeWale, we are the market leaders in the personal mobility media space.

About the Team: We are a bunch of enthusiastic analysts assisting all business functions with their data needs. We deal with huge but diverse datasets to find relationships, patterns, and meaningful insights. Our goal is to help drive growth across the organization by creating a data-driven culture.

We are looking for an experienced Data Scientist who likes to explore opportunities and know their way around data to build world class solutions making a real impact on the business. 

 

Skills / Requirements –

  • 3-5 years of experience working on Data Science projects
  • Experience doing statistical modelling of big data sets
  • Expert in Python and R, with deep knowledge of ML packages
  • Expert in fetching data with SQL
  • Ability to present and explain data to management
  • Knowledge of AWS would be beneficial
  • Demonstrated structural and analytical thinking
  • Ability to structure and execute data science projects end to end

 

Education –

Bachelor’s degree in a quantitative field (Maths, Statistics, Computer Science). Masters will be preferred.

 

Dotball Interactive Private Limited
Posted by Veena K V
Bengaluru (Bangalore)
1 - 2 yrs
₹3L - ₹6L / yr
Python
Data Analytics
SQL server
- Excellent skill in SQL and SQL tools
- Fantasy gamer
- Excited to dive deep into data and derive insights
- Decent communication and writing skills
- Interested in automated solutions
- Startup and growth mindset
- Analytical, data-mining thinking
- Python: good programmer
CSS Corp
Agency job
via staff hire solutions by Purvaja Patidar
Bengaluru (Bangalore)
1 - 3 yrs
₹10L - ₹11L / yr
Data Science
Machine Learning (ML)
Natural Language Processing (NLP)
Computer Vision
recommendation algorithm
Design and implement cloud solutions and build MLOps on cloud (GCP). Build CI/CD pipeline orchestration with GitLab CI, GitHub Actions, CircleCI, Airflow, or similar tools. Review data science models; handle code refactoring and optimization, containerization, deployment, versioning, and monitoring of model quality. Test and validate data science models and automate their testing. Communicate with a team of data scientists, data engineers, and architects, and document the processes.

Required Qualifications: Ability to design and implement cloud solutions and build MLOps pipelines on cloud (GCP). Experience with MLOps frameworks like Kubeflow, MLflow, DataRobot, Airflow, etc., and with Docker, Kubernetes, and OpenShift. Programming languages like Python, Go, Ruby, or Bash; a good understanding of Linux; and knowledge of frameworks such as scikit-learn, Keras, PyTorch, TensorFlow, etc. Ability to understand the tools used by data scientists, and experience with software development and test automation. Fluent in English, with good communication skills and the ability to work in a team.

Desired Qualifications: Bachelor’s degree in Computer Science or Software Engineering. Experience using GCP services. Google Cloud certification is good to have.
Hammoq
1 recruiter
Posted by Nikitha Muthuswamy
Remote only
3 - 6 yrs
₹8L - ₹9L / yr
Machine Learning (ML)
Deep Learning
Python
SQL
Java
Roles and Responsibilities

● Working on an awesome AI product for the eCommerce domain.
● Build the next-generation information extraction and computer vision product powered by state-of-the-art AI and deep learning techniques.
● Work with an international top-notch engineering team fully committed to machine learning development.

Desired Candidate Profile

● Passionate about search & AI technologies. Open to collaborating with colleagues &
external contributors.
● Good understanding of mainstream deep learning models from multiple domains: computer vision, NLP, reinforcement learning, model optimization, etc.
● Hands-on experience with deep learning frameworks, e.g. TensorFlow, PyTorch, MXNet, BERT. Able to implement the latest DL models using existing APIs and open-source libraries in a short time.
● Hands-on experience with cloud-native techniques. Good understanding of web services and modern software technologies.
● Maintained or contributed to machine learning projects; familiar with the agile software development process, CI/CD workflow, ticket management, code review, version control, etc.
● Skilled in the following programming languages: Python 3.
● Good English skills, especially for writing and reading documentation.
Networking & Cybersecurity Solutions
Bengaluru (Bangalore)
4 - 16 yrs
₹40L - ₹60L / yr
Spark
Apache Kafka
Data Engineer
Hadoop
Big Data
  • Developing telemetry software to connect Junos devices to the cloud
  • Fast prototyping and laying the SW foundation for product solutions
  • Moving prototype solutions to a production cloud multitenant SaaS solution
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources
  • Build analytics tools that utilize the data pipeline to provide significant insights into customer acquisition, operational efficiency and other key business performance metrics.
  • Work with partners including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
  • Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
  • Work with data and analytics specialists to strive for greater functionality in our data systems.

Qualification and Desired Experiences

  • Master’s in Computer Science, Electrical Engineering, Statistics, Applied Math, or an equivalent field with a strong mathematical background
  • 5+ years of experience building data pipelines for data science-driven solutions
  • Strong hands-on coding skills (preferably in Python) processing large-scale data sets and developing machine learning models
  • Familiar with one or more machine learning or statistical modeling tools such as NumPy, scikit-learn, MLlib, TensorFlow
  • Good team player with excellent interpersonal, written, verbal, and presentation skills
  • Create and maintain optimal data pipeline architecture,
  • Assemble large, sophisticated data sets that meet functional / non-functional business requirements.
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Experience with AWS, S3, Flink, Spark, Kafka, Elastic Search
  • Previous work in a start-up environment
  • We are looking for a candidate with 9+ years of experience in a Data Engineer role, who has attained a Graduate degree in Computer Science, Statistics, Informatics, Information Systems or another quantitative field. They should also have experience using the following software/tools:
  • Experience with big data tools: Hadoop, Spark, Kafka, etc.
  • Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.
  • Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
  • Experience with AWS cloud services: EC2, EMR, RDS, Redshift
  • Experience with stream-processing systems: Storm, Spark-Streaming, etc.
  • Experience with object-oriented/object function scripting languages: Python, Java, C++, Scala, etc.
  • Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL) as well as working familiarity with a variety of databases.
  • Experience building and optimizing ‘big data’ data pipelines, architectures and data sets.
  • Experience performing root cause analysis on internal and external data and processes to answer specific business questions and find opportunities for improvement.
  • Strong analytic skills related to working with unstructured datasets.
  • Build processes supporting data transformation, data structures, metadata, dependency and workload management.
  • A successful history of manipulating, processing and extracting value from large disconnected datasets.
  • Proven understanding of message queuing, stream processing, and highly scalable ‘big data’ data stores.
  • Strong project management and interpersonal skills.
  • Experience supporting and working with multi-functional teams in a multidimensional environment.
Yottaasys AI LLC
5 recruiters
Posted by Dinesh Krishnan
Bengaluru (Bangalore), Singapore
2 - 5 yrs
₹9L - ₹20L / yr
Data Science
Deep Learning
R Programming
Python
Machine Learning (ML)
We are a US-headquartered product company looking to hire a few passionate Deep Learning and Computer Vision team players with 2-5 years of experience! If you are any of these:
1. Expert in deep learning and machine learning techniques,
2. Extremely good at image/video processing,
3. Have a good understanding of linear algebra, optimization techniques, statistics, and pattern recognition,
then you are the right fit for this position.
Dunzo
Agency job
via zyoin by Pratibha Yadav
Bengaluru (Bangalore)
8 - 12 yrs
₹50L - ₹90L / yr
Data Science
Machine Learning (ML)
NumPy
R Programming
Python
  • B.Tech/M.Tech from a tier-1 institution
  • 8+ years of experience in machine learning techniques like logistic regression, random forest, boosting, trees, neural networks, etc.
  • Demonstrated experience with Python and SQL, and proficiency in scikit-learn, Pandas, NumPy, Keras, and TensorFlow/PyTorch
  • Experience working with Qlik Sense or Tableau is a plus
  • Experience working in a product company is a plus