Data Engineer- Spark, SQL, Scala, Azure
at MNC
Agency job
3 - 9 yrs
₹3L - ₹17L / yr
Bengaluru (Bangalore)
Skills
Scala
Spark
Data Warehouse (DWH)
Business Intelligence (BI)
Apache Spark
SQL
Azure
Dear All,
We are looking for candidates with 3 - 6 years of BI/DW experience and expertise in Spark, Scala, and SQL. An Azure background is required.
     * Spark hands-on: Must have
     * Scala hands-on: Must have
     * SQL expertise: Expert
     * Azure background: Must have
     * Python hands-on: Good to have
     * ADF, Databricks: Good to have
     * Should be able to communicate effectively and deliver technology implementation end to end
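To make these expectations concrete, here is a minimal sketch of the kind of Spark work described: one aggregation written with the DataFrame API and again in Spark SQL. It is illustrative only; the dataset, path, and column names are invented, and it uses PySpark for consistency with the other examples on this page (the role itself emphasizes Scala, where the DataFrame API is nearly identical).

```python
# A minimal sketch, not from the posting: a hypothetical orders dataset,
# aggregated once with the DataFrame API and once in Spark SQL.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("bi-dw-example").getOrCreate()

# Load a source extract (the path is invented).
orders = spark.read.parquet("/mnt/raw/orders")

# DataFrame API: total revenue per region for completed orders.
revenue = (
    orders.filter(F.col("status") == "COMPLETED")
    .groupBy("region")
    .agg(F.sum("amount").alias("total_revenue"))
)

# The same logic expressed in Spark SQL.
orders.createOrReplaceTempView("orders")
revenue_sql = spark.sql("""
    SELECT region, SUM(amount) AS total_revenue
    FROM orders
    WHERE status = 'COMPLETED'
    GROUP BY region
""")

revenue.show()
```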
Looking for candidates who can join within 15 to 30 days, or who are available immediately.


Regards
Gayatri P
Fragma Data Systems

About MNC

iSquare Soft Bangalore is India's leading recruitment solutions provider, with experience working with more than 200 IT clients.

Similar jobs

Delivery Solutions
Chennai
5 - 13 yrs
₹12L - ₹28L / yr
AngularJS (1.x)
Angular (2+)
React.js
NodeJS (Node.js)
Python
+8 more

About UPS:


Moving our world forward by delivering what matters! UPS is a company with a proud past and an even brighter future. Our values define us. Our culture differentiates us. Our strategy drives us. At UPS we are customer first, people led and innovation driven. UPS's India-based Technology Development Centers will bring UPS one step closer to creating a global technology workforce that will help accelerate our digital journey and help us engineer technology solutions that drastically improve our competitive advantage in the field of Logistics.

‘Future You’ grows as a visible and valued Technology professional with UPS, driving us towards an exciting tomorrow. As a global Technology organization, we can put serious resources behind your development. If you are solutions-oriented, UPS Technology is the place for you. ‘Future You’ delivers ground-breaking solutions to some of the biggest logistics challenges around the globe. You’ll take technology to unimaginable places and really make a difference for UPS and our customers.


Job Summary:

This position participates in the support of batch and real-time data pipelines utilizing various data analytics processing frameworks in support of data science practices for Marketing and Finance business units. This position supports the integration of data from various data sources, as well as performs extract, transform, load (ETL) data conversions, and facilitates data cleansing and enrichment. This position performs full systems life cycle management activities, such as analysis, technical requirements, design, coding, testing, implementation of systems and applications software. This position participates and contributes to synthesizing disparate data sources to support reusable and reproducible data assets.


RESPONSIBILITIES

• Supervises and supports data engineering projects and builds solutions by leveraging a strong foundational knowledge in software/application development. He/she is hands-on.

• Develops and delivers data engineering documentation.

• Gathers requirements, defines the scope, and performs the integration of data for data engineering projects.

• Recommends analytic reporting products/tools and supports the adoption of emerging technology.

• Performs data engineering maintenance and support.

• Provides the implementation strategy and executes backup, recovery, and technology solutions to perform analysis.

• Uses ETL tools to pull data from various sources and load the transformed data into a database or business intelligence platform.

• Codes using programming languages for statistical analysis and modeling, such as Python/Spark.
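
As one illustration of the ETL responsibilities above, the hedged sketch below structures a batch step as extract/transform/load functions in PySpark. The source path, column names, and target table are invented, and writing with saveAsTable assumes a metastore-backed environment.

```python
# Illustrative only: a batch ETL step structured as extract/transform/load
# functions, in the Python/Spark style the responsibilities mention.
from pyspark.sql import SparkSession, DataFrame
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("marketing-etl").getOrCreate()

def extract(path: str) -> DataFrame:
    # Pull raw events from a landing zone (hypothetical path).
    return spark.read.json(path)

def transform(df: DataFrame) -> DataFrame:
    # Cleanse and enrich: drop incomplete rows, normalize casing, add a load date.
    return (
        df.dropna(subset=["customer_id", "event_type"])
          .withColumn("event_type", F.lower("event_type"))
          .withColumn("load_date", F.current_date())
    )

def load(df: DataFrame, table: str) -> None:
    # Append into a warehouse table consumed by downstream data science work
    # (assumes a metastore-backed environment; the table name is invented).
    df.write.mode("append").saveAsTable(table)

load(transform(extract("/landing/marketing/events")), "analytics.marketing_events")
```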


REQUIRED QUALIFICATIONS:

• Literate in the programming languages used for statistical modeling and analysis, data warehousing and Cloud solutions, and building data pipelines.

• Proficient in developing notebooks in Databricks using Python, Spark, and Spark SQL.

• Strong understanding of a cloud services platform (e.g., GCP, Azure, or AWS) and all the data life cycle stages; Azure is preferred.

• Proficient in using Azure Data Factory and other Azure features such as Logic Apps.

• Knowledge of Delta Lake, Lakehouse, and Unity Catalog concepts is preferred (see the sketch after this list).

• Strong understanding of cloud-based data lake systems and data warehousing solutions.

• Has used Agile concepts for development, including Kanban and Scrum.

• Strong understanding of the data interconnections between organizations’ operational and business functions.

• Strong understanding of the data life cycle stages - data collection, transformation, analysis, storing the data securely, providing data accessibility

• Strong understanding of the data environment to ensure that it can scale for the following demands: data throughput, increasing data pipeline throughput, analyzing large amounts of data, real-time predictions, insights and customer feedback, data security, and data regulations and compliance.

• Strong knowledge of algorithms and data structures, as well as data filtering and data optimization.

• Strong understanding of analytic reporting technologies and environments (e.g., Power BI, Looker, Qlik, etc.)

• Understanding of distributed systems and the underlying business problem being addressed; guides team members on how their work will assist by performing data analysis and presenting findings to the stakeholders.
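
Since Databricks, Delta Lake, and Unity Catalog appear in the qualifications, here is a small Databricks-style sketch of persisting and querying a Delta table. The table and path names are hypothetical; in a Databricks notebook the `spark` session is already provided.

```python
# Hypothetical example: persist a staging extract as a managed Delta table,
# then query it (and its change history) with Spark SQL.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # already provided in a Databricks notebook

updates = spark.read.parquet("/mnt/staging/customers")

# Delta adds ACID transactions, schema enforcement, and time travel
# on top of plain data lake files.
updates.write.format("delta").mode("overwrite").saveAsTable("bronze.customers")

spark.sql("SELECT COUNT(*) AS n FROM bronze.customers").show()
spark.sql("DESCRIBE HISTORY bronze.customers").show(truncate=False)
```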

REQUIRED SKILLS:

• 3 years of experience with Databricks, Apache Spark, Python, and SQL


PREFERRED SKILLS:

• Delta Lake, Unity Catalog, R, Scala, Azure Logic Apps, a cloud services platform (e.g., GCP, Azure, or AWS), and Agile concepts.

Remote only
2 - 3 yrs
₹5L - ₹7L / yr
Spotfire
Qlikview
Tableau
PowerBI
Data Visualization
+7 more

About the Role:


We are on the lookout for a dynamic Marketing Automation and Data Analytics Specialist, someone who is not only adept in marketing automation/operations but also possesses keen expertise in data analytics and visualization. This role is tailor-made for individuals who are proficient with tools like Eloqua, Marketo, Salesforce Pardot, and Power BI.


As our Marketing Automation and Data Analytics Specialist, your responsibilities will span across managing and optimizing marketing automation systems and overseeing the migration and enhancement of data systems and dashboards. You will play a pivotal role in blending marketing strategies with data analytics, ensuring the creation of visually appealing and effective reports and dashboards. Collaborating closely with marketing teams, you will help in making data-driven decisions that propel the company forward.


We believe in fostering an environment where initiative and self-direction are valued. While you will receive the necessary guidance and support, the autonomy of your role is a testament to our trust in your abilities and professionalism.


Responsibilities:


  • Manage and optimize marketing automation systems (Eloqua, Marketo, Salesforce Pardot) to map and improve business processes.
  • Develop, audit, and enhance data systems, ensuring accuracy and efficiency in marketing efforts.
  • Build and migrate interactive, visually appealing dashboards and reports.
  • Develop and maintain reporting and analytics for marketing efforts, database health, lead scoring, and dashboard performance.
  • Handle technical aspects of key marketing systems and integrate them with data visualization tools like Power BI.
  • Review and improve existing SQL data sources for effective integration and analytics.
  • Collaborate closely with sales, marketing, and analytics teams to define requirements, establish best practices, and ensure successful outcomes.
  • Ensure all marketing data, dashboards, and reports are accurate and effectively meet business needs.


Ideal Candidate Qualities:


  • Strong commitment to the role with a focus on long-term growth.
  • Exceptional communication and collaboration skills across diverse teams.
  • High degree of autonomy and ability to work effectively without micromanagement.
  • Strong attention to detail and organization skills.


Qualifications:


  • Hands-on experience with marketing automation systems and data analytics tools such as Eloqua, Marketo, Salesforce Pardot, and Power BI.
  • Proven experience in data visualization and dashboard creation using Power BI.
  • Experience with SQL, including building and optimizing queries (see the sketch after this list).
  • Knowledge of ABM and Intent Signaling technologies is a plus.
  • Outstanding analytical skills with an ability to work with complex datasets.
  • Familiarity with data collection, cleaning, and transformation processes.
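
As referenced in the qualifications list, here is a minimal sketch of the SQL-plus-dashboarding workflow: pull engagement rows from a hypothetical marketing database with SQLAlchemy, roll them up with pandas, and export a tidy dataset that Power BI can consume. The connection string, table, and columns are all invented.

```python
# Hypothetical example: query a SQL Server-style marketing database and
# prepare a summary dataset for a Power BI dashboard.
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine(
    "mssql+pyodbc://user:pass@marketing-db/crm?driver=ODBC+Driver+17+for+SQL+Server"
)

query = """
    SELECT campaign_id, channel, sent_at, opened, clicked
    FROM email_engagement
    WHERE sent_at >= DATEADD(month, -3, GETDATE())
"""
df = pd.read_sql(query, engine)

# Roll up engagement by campaign and channel for dashboarding.
summary = (
    df.groupby(["campaign_id", "channel"])
      .agg(sends=("sent_at", "count"),
           open_rate=("opened", "mean"),
           click_rate=("clicked", "mean"))
      .reset_index()
)

# One simple hand-off: a curated CSV that Power BI can load as a data source.
summary.to_csv("campaign_engagement.csv", index=False)
```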


Benefits:


  • Work-from-home flexibility.
  • Career advancement opportunities and professional development support.
  • Supportive and collaborative team environment.


Hiring Process:


The hiring process at InEvolution is thoughtfully designed to ensure alignment between your career goals and our company's objectives. The process will include:


  • Initial Phone Screening: A brief conversation to discuss your background and understand your career aspirations.
  • Team Introduction Interview: Candidates who excel in the first round will engage in a deeper discussion with our team, providing insights into our work culture and the specificities of the role.
  • Technical Assessment: In the final round, you will meet our Technical Director for an in-depth conversation about your technical skills and how these align with the demands of the role.


Perfios
Agency job
via Seven N Half by Susmitha Goddindla
Bengaluru (Bangalore)
4 - 6 yrs
₹4L - ₹15L / yr
SQL
ETL tool
Python developer
MongoDB
Data Science
+15 more
Job Description
1. ROLE AND RESPONSIBILITIES
1.1. Implement next generation intelligent data platform solutions that help build high performance distributed systems.
1.2. Proactively diagnose problems and envisage long term life of the product focusing on reusable, extensible components.
1.3. Ensure agile delivery processes.
1.4. Work collaboratively with stakeholders, including product and engineering teams.
1.5. Build best-practices in the engineering team.
2. PRIMARY SKILL REQUIRED
2.1. 2-6 years of core software product development experience.
2.2. Experience working on data-intensive projects with a variety of technology stacks, including different programming languages (Java, Python, Scala)
2.3. Experience in building the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources, to support other teams running pipelines/jobs/reports etc.
2.4. Experience with the open-source stack
2.5. Experience working with RDBMS and NoSQL databases (see the sketch after this list)
2.6. Knowledge of enterprise data lakes, data analytics, reporting, in-memory data handling, etc.
2.7. Has a core computer science academic background
2.8. Aspires to continue pursuing a career in the technical stream
3. Optional Skill Required:
3.1. Understanding of Big Data technologies and Machine learning/Deep learning
3.2. Understanding of diverse set of databases like MongoDB, Cassandra, Redshift, Postgres, etc.
3.3. Understanding of Cloud Platform: AWS, Azure, GCP, etc.
3.4. Experience in BFSI domain is a plus.
4. PREFERRED SKILLS
4.1. A startup mentality: comfort with ambiguity, a willingness to test, learn, and improve rapidly
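
For items 2.3 and 2.5, a hedged sketch of combining a relational source with a NoSQL store follows. The hosts, database names, and fields are invented; it simply joins the two extracts into one enriched view.

```python
# Illustrative sketch: pull from an RDBMS and a NoSQL store, then merge.
import pandas as pd
from sqlalchemy import create_engine
from pymongo import MongoClient

# Relational side: customer master data (connection details are hypothetical).
engine = create_engine("postgresql://user:pass@db-host/platform")
customers = pd.read_sql("SELECT customer_id, segment FROM customers", engine)

# NoSQL side: raw activity documents from MongoDB.
mongo = MongoClient("mongodb://mongo-host:27017")
events = pd.DataFrame(list(
    mongo["platform"]["events"].find({}, {"_id": 0, "customer_id": 1, "action": 1})
))

# Join the two sources so downstream pipelines/reports see one enriched view.
enriched = events.merge(customers, on="customer_id", how="left")
print(enriched.head())
```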
Talent500
Agency job
via Talent500 by ANSR by Raghu R
Bengaluru (Bangalore)
1 - 10 yrs
₹5L - ₹30L / yr
Python
ETL
SQL
SQL Server Reporting Services (SSRS)
Data Warehouse (DWH)
+6 more

A proficient, independent contributor that assists in technical design, development, implementation, and support of data pipelines; beginning to invest in less-experienced engineers.

Responsibilities:

- Design, create, and maintain on-premise and cloud-based data integration pipelines. 
- Assemble large, complex data sets that meet functional/non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources.
- Build analytics tools that utilize the data pipeline to provide actionable insights into key business performance metrics.
- Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
- Create data pipelines that enable the BI, Analytics, and Data Science teams to build and optimize their systems
- Assist in the onboarding, training and development of team members.
- Review code changes and pull requests for standardization and best practices
- Evolve existing development to be automated, scalable, resilient, self-serve platforms
- Assist the team in design and requirements gathering for technical and non-technical work, to drive the direction of projects

 

Technical & Business Expertise:

- Hands-on integration experience with SSIS/Mulesoft
- Hands-on experience with Azure Synapse
- Proven advanced-level database development experience in SQL Server
- Proven advanced-level understanding of Data Lake concepts
- Proven intermediate-level experience writing Python or a similar programming language
- Intermediate understanding of Cloud Platforms (GCP) 
- Intermediate understanding of Data Warehousing
- Advanced understanding of source control (GitHub)

Softobiz Technologies Private Limited
Posted by Swati Sharma
Hyderabad
5 - 13 yrs
₹10L - ₹25L / yr
Azure Data Factory
SQL Server
SSIS
SQL Server Integration Services (SSIS)
Data Warehouse (DWH)
+7 more

Responsibilities


  • Design and implement Azure BI infrastructure, ensure overall quality of delivered solution 
  • Develop analytical & reporting tools, promote and drive adoption of developed BI solutions 
  • Actively participate in BI community 
  • Establish and enforce technical standards and documentation 
  • Participate in daily scrums  
  • Record progress daily in assigned DevOps items 


Ideal Candidates should have


  • 5+ years of experience in a similar senior business intelligence development position 
  • To be successful in the role you will require a high level of expertise across all facets of the Microsoft BI stack and prior experience in designing and developing well-performing data warehouse solutions 
  • Demonstrated experience using development tools such as Azure SQL database, Azure Data Factory, Azure Data Lake, Azure Synapse, and Azure DevOps. 
  • Experience with development methodologies including Agile, DevOps, and CICD patterns 
  • Strong oral and written communication skills in English 
  • Ability and willingness to learn quickly and continuously 
  • Bachelor's Degree in computer science 


Hammoq
Posted by Nikitha Muthuswamy
Remote only
3 - 6 yrs
₹8L - ₹9L / yr
Machine Learning (ML)
Deep Learning
Python
SQL
Java
+4 more
Roles and Responsibilities

● Work on an awesome AI product for the eCommerce domain.
● Build the next-generation information extraction and computer vision product, powered by state-of-the-art AI and Deep Learning techniques.
● Work with an international top-notch engineering team fully committed to Machine Learning development.

Desired Candidate Profile

● Passionate about search & AI technologies. Open to collaborating with colleagues & external contributors.
● Good understanding of mainstream deep learning models from multiple domains: computer vision, NLP, reinforcement learning, model optimization, etc.
● Hands-on experience with deep learning frameworks, e.g. TensorFlow, PyTorch, MXNet, and models such as BERT; able to implement the latest DL models using existing APIs and open-source libraries in a short time (see the sketch after this list).
● Hands-on experience with Cloud-Native techniques. Good understanding of web services and modern software technologies.
● Maintained or contributed to machine learning projects; familiar with the agile software development process, CI/CD workflow, ticket management, code review, version control, etc.
● Skilled in the following programming languages: Python 3.
● Good English skills, especially for writing and reading documentation.
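
To illustrate the deep-learning-framework bullet above, here is a short, self-contained PyTorch inference sketch: load a pretrained ResNet and classify a single image. The model choice and image path are illustrative, not taken from the posting.

```python
# Hypothetical example: classify one product photo with a pretrained model.
import torch
from torchvision import models, transforms
from PIL import Image

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

# Standard ImageNet preprocessing for ResNet-family models.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

image = Image.open("product_photo.jpg").convert("RGB")  # path is invented
batch = preprocess(image).unsqueeze(0)  # add a batch dimension

with torch.no_grad():
    logits = model(batch)
predicted_class = logits.argmax(dim=1).item()
print(f"Predicted ImageNet class index: {predicted_class}")
```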
Banyan Data Services
Posted by Sathish Kumar
Bengaluru (Bangalore)
3 - 15 yrs
₹6L - ₹20L / yr
Data Science
Data Scientist
MongoDB
Java
Big Data
+14 more

Senior Big Data Engineer 

Note: Notice period: 45 days 

Banyan Data Services (BDS) is a US-based data-focused Company that specializes in comprehensive data solutions and services, headquartered in San Jose, California, USA. 

 

We are looking for a Senior Hadoop Big Data Engineer who has expertise in solving complex data problems across a big data platform. You will be a part of our development team based out of Bangalore. This team focuses on the most innovative and emerging data infrastructure software and services to support highly scalable and available infrastructure. 

 

It's a once-in-a-lifetime opportunity to join our rocket ship startup run by a world-class executive team. We are looking for candidates that aspire to be a part of the cutting-edge solutions and services we offer that address next-gen data evolution challenges. 

 

 

Key Qualifications

 

·   5+ years of experience working with Java and Spring technologies

· At least 3 years of programming experience working with Spark on big data, including experience with data profiling and building transformations

· Knowledge of microservices architecture is a plus 

· Experience with any NoSQL databases such as HBase, MongoDB, or Cassandra

· Experience with Kafka or any streaming tools (see the sketch after this list)

· Knowledge of Scala would be preferable

· Experience with agile application development 

· Exposure to any cloud technologies, including containers and Kubernetes 

· Demonstrated experience performing DevOps for platforms 

· Strong skills in data structures & algorithms, with the ability to write efficient, low-complexity code

· Exposure to Graph databases

· Passion for learning new technologies and the ability to do so quickly 

· A Bachelor's degree in a computer-related field or equivalent professional experience is required
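
The Spark, Kafka, and streaming items above might look like the following in practice: a hedged Structured Streaming sketch that consumes a Kafka topic and lands it as Parquet. The broker, topic, and paths are invented; the role itself centres on Java/Spring, and PySpark is used here only to keep the examples on this page in one language.

```python
# Requires the spark-sql-kafka connector package on the Spark classpath.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("kafka-stream").getOrCreate()

# Subscribe to a (hypothetical) Kafka topic.
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "user-events")
    .load()
)

# Kafka delivers key/value as binary; cast to strings before processing.
parsed = events.select(
    F.col("key").cast("string"),
    F.col("value").cast("string").alias("payload"),
    "timestamp",
)

# Land the stream as Parquet; the checkpoint makes the query restartable.
query = (
    parsed.writeStream.format("parquet")
    .option("path", "/data/streams/user_events")
    .option("checkpointLocation", "/data/checkpoints/user_events")
    .start()
)
query.awaitTermination()
```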

 

Key Responsibilities

 

· Scope and deliver solutions, with the ability to design them independently based on a high-level architecture

· Design and develop big data-focused microservices

· Be involved in big data infrastructure, distributed systems, data modeling, and query processing

· Build software with cutting-edge technologies on the cloud

· Be willing to learn new technologies and take on research-oriented projects 

· Proven interpersonal skills, contributing to team effort by accomplishing related results as needed 

Graphene Services Pte Ltd
Posted by Swetha Seshadri
Bengaluru (Bangalore)
2 - 5 yrs
Best in industry
Python
MySQL
SQL
NOSQL Databases
PowerBI
+2 more

About Graphene  

Graphene is a Singapore-headquartered AI company that has been recognized as Singapore's Best Start-Up by Switzerland's Seedstars World, and has also been awarded Best AI Platform for Healthcare at VivaTech Paris. Graphene India is also a member of the exclusive NASSCOM Deeptech club. We are developing an AI platform which is disrupting and replacing traditional market research with unbiased insights, with a focus on healthcare, consumer goods, and financial services.  

  

Graphene was founded by corporate leaders from Microsoft and P&G, and works closely with the Singapore Government & universities in creating cutting-edge technology which is gaining traction with many Fortune 500 companies in India, Asia, and the USA.  

Graphene’s culture is grounded in delivering customer delight by recruiting high potential talent and providing an intense learning and collaborative atmosphere, with many ex-employees now hired by large companies across the world.  

  

Graphene has a 6-year track record of delivering financially sustainable growth and is one of the rare start-ups that is self-funded, yet profitable and debt-free. We have already created a strong bench of Singaporean leaders and are recruiting and grooming more talent with a focus on our US expansion.   

  

Job title: Data Analyst 

Job Description  

The Data Analyst is responsible for data storage, enrichment, and transformation, gathering data based on data requests, and testing and maintaining data pipelines. 

Responsibilities and Duties  

  • Managing end to end data pipeline from data source to visualization layer 
  • Ensure data integrity; ability to pre-empt data errors (see the sketch after this list) 
  • Organized management and storage of data 
  • Provide quality assurance of data, working with quality assurance analysts if necessary. 
  • Commissioning and decommissioning of data sets. 
  • Processing confidential data and information according to guidelines. 
  • Helping develop reports and analysis. 
  • Troubleshooting the reporting database environment and reports. 
  • Managing and designing the reporting environment, including data sources, security, and metadata. 
  • Supporting the data warehouse in identifying and revising reporting requirements. 
  • Supporting initiatives for data integrity and normalization. 
  • Evaluating changes and updates to source production systems. 
  • Training end-users on new reports and dashboards. 
  • Initiate data gathering based on data requirements 
  • Analyse the raw data to check if the requirement is satisfied 

 

Qualifications and Skills   

  

  • Technologies required: Python, SQL/NoSQL databases (Cosmos DB)     
  • Experience required: 2 – 5 years, including experience in data analysis using Python 

  • Understanding of the software development life cycle   

  • Plan, coordinate, develop, test, and support data pipelines; document them; and support reporting dashboards (Power BI) 
  • Automate the steps needed to transform and enrich data.   
  • Communicate issues, risks, and concerns proactively to management. Document the process thoroughly to allow peers to assist with support as needed.   
  • Excellent verbal and written communication skills   
Customer-centric company and deliver data products ( A1)
Agency job
via Multi Recruit by Santhosh Kumar KR
Bengaluru (Bangalore)
8 - 16 yrs
₹25L - ₹30L / yr
Data engineering
Data engineer
PowerBI
Data modeling
Data validation
+2 more
  • Create, maintain and automate datasets and insightful dashboards to track core metrics and extract business insights
  • Analyze large-scale structured and unstructured data to identify business opportunities and optimize features for Analytics.
Requirements:
  • 8+ years of experience doing Business Intelligence and Analytics work
  • B.Tech/M.Tech in a technical field (Computer Science, Math, Statistics)
  • Strong knowledge of Data Design, Data Modelling, and Data Validation best practices
  • Proficient in data visualization, preferably SSRS & Power BI
  • Fluency with writing advanced SQL code
  • Experience with SQL Server Administration and best practices
  • Possess BFSI and Fintech domain knowledge
  • Excellent interpersonal, cross-functional, communication, writing, and presentation skills
  • Comfortable working in a fast-paced environment with the ability to be a team player
  • Possess excellent Project Management and superior Team Management skills

Cemtics
Posted by Tapan Sahani
Remote, NCR (Delhi | Gurgaon | Noida)
4 - 6 yrs
₹5L - ₹12L / yr
Big Data
Spark
Hadoop
SQL
Python
+1 more

JD:

Required Skills:

  • Intermediate to expert-level hands-on programming in one of the following languages: Java, Python, PySpark, or Scala.
  • Strong practical knowledge of SQL.
  • Hands-on experience with Spark/SparkSQL (see the sketch after this list)
  • Data Structures and Algorithms
  • Hands-on experience as an individual contributor in the design, development, testing, and deployment of applications based on Big Data technologies
  • Experience with Big Data application tools, such as Hadoop, MapReduce, Spark, etc.
  • Experience with NoSQL databases like HBase, etc.
  • Experience with the Linux OS environment (shell scripting, AWK, SED)
  • Intermediate RDBMS skills; able to write SQL queries with complex relations on top of a large RDBMS (100+ tables)
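
A hedged sketch tying together the Spark/SparkSQL and complex-SQL items above: read one table from a large RDBMS over JDBC and pick each customer's latest order with a window function. The JDBC URL, credentials, and columns are invented, and the appropriate JDBC driver must be on the Spark classpath.

```python
# Hypothetical example: Spark over JDBC plus a window-function query.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("rdbms-analytics").getOrCreate()

orders = (
    spark.read.format("jdbc")
    .option("url", "jdbc:postgresql://db-host:5432/sales")
    .option("dbtable", "public.orders")
    .option("user", "reader")
    .option("password", "secret")
    .load()
)
orders.createOrReplaceTempView("orders")

# Latest order per customer, expressed as SparkSQL with a window function.
latest = spark.sql("""
    SELECT *
    FROM (
        SELECT o.*,
               ROW_NUMBER() OVER (PARTITION BY customer_id ORDER BY order_ts DESC) AS rn
        FROM orders o
    ) AS t
    WHERE rn = 1
""")
latest.show()
```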