7+ Big Data Jobs in Ahmedabad | Big Data Job Openings in Ahmedabad
Apply to 7+ Big Data Jobs in Ahmedabad on CutShort.io. Explore the latest Big Data job opportunities across top companies like Google, Amazon & Adobe.
Consulting & implementation services in the areas of Oil & Gas, Mining, and Manufacturing industries
- Data Engineer
Required skill set: AWS Glue, AWS Lambda, AWS SNS/SQS, AWS Athena, Spark, Snowflake, Python
Mandatory Requirements
- Experience with AWS Glue
- Experience with Apache Parquet
- Proficiency with AWS S3 and data lakes
- Knowledge of Snowflake
- Understanding of file-based ingestion best practices
- Scripting languages: Python & PySpark (see the ingestion sketch after this list)
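To make the Glue/S3/Parquet stack concrete, here is a minimal PySpark ingestion sketch. It assumes a Glue or EMR runtime with S3 access; the bucket names, paths, and column layout are hypothetical placeholders, not details from this posting.

```python
# Minimal file-based ingestion sketch: read raw CSV drops from a landing
# zone and write them to the data lake as partitioned Parquet.
# All S3 paths below are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("csv_to_parquet_ingest").getOrCreate()

raw = spark.read.option("header", "true").csv("s3://example-landing/orders/")

# Stamp each row with its ingestion date and partition on it, so that
# downstream readers (Athena, Snowflake external tables) can prune files.
curated = raw.withColumn("ingest_date", F.current_date())

(curated.write
    .mode("append")
    .partitionBy("ingest_date")
    .parquet("s3://example-lake/curated/orders/"))
```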
CORE RESPONSIBILITIES
- Create and manage cloud resources in AWS
- Ingest data from sources that expose it through different technologies, such as RDBMS, REST HTTP APIs, flat files, streams, and time-series data from various proprietary systems; implement ingestion and processing with Big Data technologies
- Process and transform data using technologies such as Spark and cloud services; understand your part of the business logic and implement it in the language supported by the base data platform
- Develop automated data quality checks to ensure the right data enters the platform, and verify the results of calculations (a minimal check is sketched after this list)
- Develop an infrastructure to collect, transform, combine and publish/distribute customer data.
- Define process improvement opportunities to optimize data collection, insights and displays.
- Ensure data and results are accessible, scalable, efficient, accurate, complete and flexible
- Identify and interpret trends and patterns from complex data sets
- Construct a framework utilizing data visualization tools and techniques to present consolidated analytical and actionable results to relevant stakeholders.
- Key participant in regular Scrum ceremonies with the agile teams
- Proficient at developing queries, writing reports and presenting findings
- Mentor junior members and bring in industry best practices
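As one hedged illustration of the automated data-quality checks mentioned above, the sketch below validates a batch before it enters the platform. The function name, thresholds, and column list are assumptions for the example, not part of the job description.

```python
# Hypothetical data-quality gate: flag a batch when required columns
# contain nulls or the row count falls below a minimum.
from pyspark.sql import DataFrame
from pyspark.sql import functions as F

def run_quality_checks(df: DataFrame, required_cols, min_rows=1):
    """Return a list of human-readable failures; an empty list means the batch passes."""
    failures = []
    if df.count() < min_rows:
        failures.append(f"row count below {min_rows}")
    for col in required_cols:
        nulls = df.filter(F.col(col).isNull()).count()
        if nulls:
            failures.append(f"{nulls} null values in required column '{col}'")
    return failures
```

A caller would typically abort the load and raise an alert whenever the returned list is non-empty.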
QUALIFICATIONS
- 5-7+ years' experience as a data engineer in consumer finance or an equivalent industry (consumer loans, collections, servicing, optional products, and insurance sales)
- Strong background in math, statistics, computer science, data science or related discipline
- Advanced knowledge of at least one language: Java, Scala, Python, C#
- Production experience with: HDFS, YARN, Hive, Spark, Kafka, Oozie / Airflow, Amazon Web Services (AWS), Docker / Kubernetes, Snowflake
- Proficient with
- Data mining/programming tools (e.g. SAS, SQL, R, Python)
- Database technologies (e.g. PostgreSQL, Redshift, Snowflake, and Greenplum)
- Data visualization (e.g. Tableau, Looker, MicroStrategy)
- Comfortable learning about and deploying new technologies and tools.
- Organizational skills and the ability to handle multiple projects and priorities simultaneously and meet established deadlines.
- Good written and oral communication skills and ability to present results to non-technical audiences
- Knowledge of business intelligence and analytical tools, technologies and techniques.
Familiarity or experience with the following is a plus:
- AWS certification
- Spark Streaming
- Kafka Streams / Kafka Connect (see the streaming sketch after this list)
- ELK Stack
- Cassandra / MongoDB
- CI/CD: Jenkins, GitLab, Jira, Confluence, and other related tools
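For the streaming items above, a minimal Spark Structured Streaming consumer might look like the sketch below. It assumes the spark-sql-kafka package is on the classpath; the broker address, topic, and checkpoint path are hypothetical.

```python
# Hypothetical Spark Structured Streaming job reading a Kafka topic
# and echoing the decoded payload to the console.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("kafka_stream_demo").getOrCreate()

events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "events")
          .load())

# Kafka delivers the payload as bytes; cast it before any parsing.
query = (events.selectExpr("CAST(value AS STRING) AS payload")
         .writeStream
         .format("console")
         .option("checkpointLocation", "/tmp/chk/events")
         .start())

query.awaitTermination()
```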
Job Description
1. Communicate with the clients and understand their business requirements.
2. Build, train, and manage your own team of junior data engineers.
3. Assemble large, complex data sets that meet the client’s business requirements.
4. Identify, design and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
5. Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources, including the cloud.
6. Assist clients with data-related technical issues and support their data infrastructure requirements.
7. Work with data scientists and analytics experts to strive for greater functionality.
Skills required (experience with most of the following):
1. Experience with Big Data tools: Hadoop, Spark, Apache Beam, Kafka, etc.
2. Experience with object-oriented/functional scripting languages: Python, Java, C++, Scala, etc.
3. Experience in ETL and Data Warehousing.
4. Experience with and a firm understanding of relational and non-relational databases such as MySQL, MS SQL Server, Postgres, MongoDB, Cassandra, etc.
5. Experience with cloud platforms like AWS, GCP and Azure.
6. Experience with workflow management using tools like Apache Airflow (a minimal DAG is sketched below).
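As a hedged illustration of workflow management with Airflow, here is a minimal two-task DAG in the Airflow 2.x style; the DAG id, schedule, and task bodies are placeholders invented for the example.

```python
# Hypothetical minimal Airflow DAG wiring an extract step into a load step.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull rows from the source system")  # stub, not a real client

def load():
    print("write rows to the warehouse")  # stub, not a real client

with DAG(
    dag_id="daily_etl_example",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task  # run extract before load
```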
Leading IT MNC Company
Roles & Responsibilities:
Technical Skills:
- .NET: C#, .NET Core, MVC, .NET Framework, Web API, Web Services, Microservices, and SQL
- Azure: Azure Cloud, SaaS, PaaS, IaaS, Azure relational and NoSQL databases, Big Data services
Responsibilities
- Good understanding of and experience working on Microsoft Azure (IaaS/PaaS/SaaS)
- Ability to architect, design, and implement cloud-based solutions
- Proven track record of designing and implementing IoT-based solutions, Big Data solutions, and applications on the Azure cloud platform.
- Experience in building .Net-based enterprise distributed solutions in Windows and Linux.
- Experience with CI/CD tools such as Jenkins, Azure Pipelines, and Terraform; experience with other tooling such as Ansible, CloudFormation, etc.
- Good understanding of HA/DR Setups in Cloud
- Experience and working knowledge of Virtualization, Networking, Data Center, and Security
- Deep hands-on experience in the design, development, and deployment of business software at scale.
- Strong hands-on experience in Azure Cloud Platform
- Experience with Kubernetes, Docker, and other cloud deployment and container technologies
- Experience with or knowledge of other cloud offerings (e.g. AWS, GCP) is an added advantage
- Experience with monitoring tools like Prometheus, Grafana, Datadog, etc.
2. Hands-on experience with Hibernate/JPA
3. Knowledge of microservices and design patterns is an added advantage
4. Experience working with tools like Git, Jenkins, Maven
5. Working knowledge of Oracle or MySQL databases
6. Strong agile/scrum development experience
at Kalibre global konnects
Job Title:- Assistant Manager - Business Analytics
Age:- Max. 35 years.
Working Days:- 6 days a week
Location:- Ahmedabad, Gujarat
Monthly CTC:- Salary will be commensurate with experience.
Educational Qualification:- The candidate should have a bachelor's degree in IT/Engineering from a recognized university.
Experience:- 2+ years of work experience in AI/ML/business analytics at an institute of repute or in a corporate setting.
Required Technical Skills:-
A fair understanding of business analytics, data science, visualization, Big Data, etc.
Basic knowledge of analytical tools such as R, Python, etc.
Hands on experience in Moodle Development (desirable).
Good knowledge of customizing Moodle functionality and developing custom themes for Moodle (desirable).
An analytical mindset: someone who enjoys helping participants solve problems and turning data into useful, actionable information.
Key Responsibilities include:-
Understand the tools and technologies specific to e-learning and blended learning
development and delivery.
Provide academic as well as technical assistance to the faculty members teaching the
analytics courses.
Work closely with the instructors, assisting them with programming, coding, testing, etc.
Prepare lab study material in coordination with the instructors, assist students in the programming lab, and resolve their doubts.
Work on assignments dealing with the routine daily operation, use, and configuration of the Learning Management System (LMS).
Administer learning technology platforms, including the creation of courses, certifications, and other e-learning programs.
Provide support within the eLearning department, provide technical support to external clients, and administer the Learning Management System.
Create user groups, assign content and assessments to the right target audience, run reports, and create learning events in the LMS.
Perform regular maintenance of the LMS database, including adding or removing courses.
Upload, test, deploy, and maintain all training materials/learning assets hosted in the LMS.
Ability to multi-task.
Ability to demonstrate accuracy on detail-oriented and repetitive job assignments.
Responsible and reliable
The programmer should be proficient in Python and able to work fully independently. They should also be skilled with databases, with a strong ability to fetch data from various sources, organise it, and surface useful information through efficient code.
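As a hedged sketch of the kind of work described above (the database path, table, and column names are invented for illustration), fetching and organising data in Python might look like this:

```python
# Hypothetical example: fetch rows from a SQLite database and organise
# them into a per-region summary with pandas.
import sqlite3
import pandas as pd

def load_sales_summary(db_path):
    """Fetch raw sales rows and aggregate revenue per region."""
    with sqlite3.connect(db_path) as conn:
        df = pd.read_sql_query("SELECT region, amount FROM sales", conn)
    # Organise the raw rows into a compact, useful view of the data.
    return df.groupby("region", as_index=False)["amount"].sum()

if __name__ == "__main__":
    print(load_sales_summary("sales.db"))  # placeholder database file
```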
Familiarity with Python
Some examples of work: