Big Data Developer / Lead / Architect

Agency job
5 - 13 yrs
₹9L - ₹28L / yr
Chennai
Skills
Spark
Hadoop
Big Data
Data engineering
PySpark
Apache Hive
ETL
.NET
Microsoft Windows Azure
PowerBI
Apache Kafka
  • 5+ years of demonstrable experience owning and developing big data solutions using Hadoop, Hive/HBase, Spark, Databricks, and ETL/ELT
  • 10+ years of Information Technology experience, preferably with telecom / wireless service providers
  • Experience designing data solutions following Agile practices (SAFe methodology); designing for testability, deployability and releasability; rapid prototyping, data modeling, and decentralized innovation
  • DataOps mindset: allowing the architecture of a system to evolve continuously over time while simultaneously supporting the needs of current users
  • Create and maintain the Architectural Runway and non-functional requirements
  • Design for the Continuous Delivery Pipeline (CI/CD data pipeline) and enable built-in quality and security from the start
  • Demonstrable understanding, and ideally use, of at least one recognised architecture framework or standard, e.g. TOGAF, the Zachman Framework, etc.
  • The ability to apply data, research, and professional judgment and experience to ensure our products are making the biggest difference to consumers
  • Demonstrated ability to work collaboratively
  • Excellent written, verbal and social skills; you will be interacting with all types of people (user experience designers, developers, managers, marketers, etc.)
  • Ability to work in a fast-paced, multi-project environment, independently and with minimal supervision
  • Technologies: .NET, AWS, Azure; Azure Synapse, NiFi, RDS, Apache Kafka, Azure Databricks, Azure Data Lake Storage, Power BI, reporting analytics, QlikView, on-prem SQL data warehouse; BSS, OSS & enterprise support systems
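The ETL/ELT experience asked for above boils down to the extract–transform–load pattern. As a purely illustrative sketch (plain Python and SQLite standing in for Spark/Databricks at toy scale; the call-record schema is invented for the example):

```python
import csv
import io
import sqlite3

# Hypothetical toy ETL: extract call-usage rows from CSV text,
# transform (parse types, drop malformed rows), load into a SQL table.
RAW = """subscriber,minutes
A100,12
A101,not_a_number
A102,30
"""

def extract(text):
    # Extract: read raw CSV into dict records.
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    # Transform: coerce types and discard records that fail validation.
    clean = []
    for row in rows:
        try:
            clean.append((row["subscriber"], int(row["minutes"])))
        except ValueError:
            continue  # A101 is dropped here
    return clean

def load(records):
    # Load: write the cleaned records into the target table.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE usage (subscriber TEXT, minutes INTEGER)")
    conn.executemany("INSERT INTO usage VALUES (?, ?)", records)
    return conn

conn = load(transform(extract(RAW)))
total = conn.execute("SELECT SUM(minutes) FROM usage").fetchone()[0]
print(total)  # 42: only the two valid rows survive the transform step
```

The same three-stage shape scales up: in Spark or Databricks, `extract` becomes a source reader, `transform` a DataFrame pipeline, and `load` a sink writer.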



Similar jobs

Publicis Sapient
Posted by Mohit Singh
Bengaluru (Bangalore), Gurugram, Pune, Hyderabad, Noida
4 - 10 yrs
Best in industry
PySpark
Data engineering
Big Data
Hadoop
Spark
+6 more

Publicis Sapient Overview:

As a Senior Associate L1 in Data Engineering, you will translate client requirements into technical designs and implement components for data engineering solutions. You will utilize a deep understanding of data integration and big data design principles to create custom solutions or implement packaged solutions, and will independently drive design discussions to ensure the necessary health of the overall solution.

Job Summary:

As a Senior Associate L1 in Data Engineering, you will produce technical designs and implement components for data engineering solutions. You will utilize a deep understanding of data integration and big data design principles to create custom solutions or implement packaged solutions, and will independently drive design discussions to ensure the necessary health of the overall solution.

The role requires a hands-on technologist with a strong programming background in Java, Scala, or Python, experience in data ingestion, integration and wrangling, computation and analytics pipelines, and exposure to Hadoop ecosystem components. Hands-on knowledge of at least one of the AWS, GCP, or Azure cloud platforms is preferred.


Role & Responsibilities:

Job Title: Senior Associate L1 – Data Engineering

Your role is focused on the design, development and delivery of solutions involving:

• Data Ingestion, Integration and Transformation

• Data Storage and Computation Frameworks, Performance Optimizations

• Analytics & Visualizations

• Infrastructure & Cloud Computing

• Data Management Platforms

• Build functionality for data ingestion from multiple heterogeneous sources in batch & real-time

• Build functionality for data analytics, search and aggregation
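The ingestion bullets above, covering multiple heterogeneous sources, usually come down to mapping each source format onto one common schema. A minimal illustrative sketch (the feeds and field names are invented for the example):

```python
import csv
import io
import json

# Two hypothetical feeds delivering the same entity in different formats.
CSV_FEED = "id,amount\n1,10.5\n2,7.0\n"
JSON_FEED = '[{"event_id": 3, "value": 2.5}]'

def from_csv(text):
    # CSV source: field names already match the common schema.
    for row in csv.DictReader(io.StringIO(text)):
        yield {"id": int(row["id"]), "amount": float(row["amount"])}

def from_json(text):
    # JSON source: map the source-specific field names onto the common schema.
    for obj in json.loads(text):
        yield {"id": obj["event_id"], "amount": obj["value"]}

records = list(from_csv(CSV_FEED)) + list(from_json(JSON_FEED))
print(records)
# [{'id': 1, 'amount': 10.5}, {'id': 2, 'amount': 7.0}, {'id': 3, 'amount': 2.5}]
```

In a real pipeline each `from_*` adapter would wrap a connector (Sqoop, Kafka consumer, NiFi processor), but the normalize-to-one-schema step stays the same.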


Experience Guidelines:

Mandatory Experience and Competencies:

# Competency

1. Overall 3.5+ years of IT experience, with 1.5+ years in data-related technologies

2. Minimum 1.5 years of experience in Big Data technologies

3. Hands-on experience with the Hadoop stack – HDFS, Sqoop, Kafka, Pulsar, NiFi, Spark, Spark Streaming, Flink, Storm, Hive, Oozie, Airflow and the other components required to build end-to-end data pipelines. Working knowledge of real-time data pipelines is an added advantage.

4. Strong experience in at least one of the programming languages Java, Scala, or Python; Java preferred

5. Hands-on working knowledge of NoSQL and MPP data platforms like HBase, MongoDB, Cassandra, AWS Redshift, Azure SQL DW, GCP BigQuery, etc.


Preferred Experience and Knowledge (Good to Have):

# Competency

1. Good knowledge of traditional ETL tools (Informatica, Talend, etc.) and database technologies (Oracle, MySQL, SQL Server, Postgres), with hands-on experience

2. Knowledge of data governance processes (security, lineage, catalog) and tools like Collibra, Alation, etc.

3. Knowledge of distributed messaging frameworks like ActiveMQ / RabbitMQ / Solace, search & indexing, and microservices architectures

4. Performance tuning and optimization of data pipelines

5. CI/CD – infra provisioning on cloud, automated build & deployment pipelines, code quality

6. Working knowledge of data platform related services on at least one cloud platform, IAM and data security

7. Cloud data specialty and other related Big Data technology certifications



Personal Attributes:

• Strong written and verbal communication skills

• Articulation skills

• Good team player

• Self-starter who requires minimal oversight

• Ability to prioritize and manage multiple tasks

• Process orientation and the ability to define and set up processes

Uber9 Business Process Services Pvt Ltd
Posted by Lakshmi J
Chennai
2 - 3 yrs
₹10L - ₹18L / yr
Spotfire
Qlikview
Tableau
PowerBI
Data Visualization
+3 more

About the company:

VakilSearch is a technology-driven platform, offering services that cover the legal needs of startups and established businesses. Some of our services include incorporation, government registrations & filings, accounting, documentation and annual compliances. In addition, we offer a wide range of services to individuals, such as property agreements and tax filings. Our mission is to provide one-click access to individuals and businesses for all their legal and professional needs.

 

You can learn more about us at vakilsearch.com.

About the role:

A successful data analyst needs a combination of technical as well as leadership skills. A background in Mathematics, Statistics, Computer Science, or Information Management can serve as a solid foundation to build your career as a data analyst at VakilSearch.

 

Why join Vakilsearch:

 

  • Unlimited opportunities to grow
  • Flat hierarchy
  • An encouraging environment to unleash your out-of-the-box thinking

 

Responsibilities:

 

  • Preparing reports for the stakeholders and the management, enabling them to take important decisions based on facts and trends
  • Using automated tools to extract data from primary and secondary sources
  • Identifying and recommending the right product metrics to be analysed and tracked for every feature / problem statement
  • Using statistical tools to identify, analyse, and interpret patterns and trends in complex data sets that can inform diagnosis and prediction
  • Working with programmers, engineers, and management heads to identify process improvement opportunities, propose system modifications, and devise data governance strategies
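The statistical pattern-finding responsibility above can be as simple as flagging values that sit far from the mean. A toy sketch (the daily filing counts are invented; a z-score-style cutoff stands in for a real statistical test):

```python
import statistics

# Hypothetical daily filing counts; the spike on day 6 is the anomaly.
daily_filings = [102, 98, 105, 99, 101, 180, 103]

mean = statistics.mean(daily_filings)
stdev = statistics.stdev(daily_filings)  # sample standard deviation

# Flag any day more than two standard deviations from the mean.
outliers = [x for x in daily_filings if abs(x - mean) > 2 * stdev]
print(outliers)  # [180]
```

The same idea, applied per metric and per segment, is what turns raw extracts into the "patterns and trends" the stakeholder reports describe.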

 

Required skills:

 

  • Bachelor’s degree from an accredited university or college in computer science, or a graduate of a data-science-related program
  • Minimum of 0–2 years of experience in analysing data
Hyderabad
5 - 12 yrs
₹10L - ₹35L / yr
Analytics
Kubernetes
Apache Kafka
Data Analytics
Python
+3 more
  • 3+ years of industry experience in administering (including setting up, managing and monitoring) data processing pipelines (both streaming and batch) using frameworks such as Kafka, the ELK Stack and Fluentd, and streaming databases like Druid
  • Strong industry expertise with containerization technologies, including Kubernetes and Docker Compose
  • 2+ years of industry experience in developing scalable data ingestion processes and ETLs
  • Experience with cloud platform services such as AWS, Azure or GCP, especially with EKS and managed Kafka
  • Experience with scripting languages; Python experience highly desirable
  • 2+ years of industry experience in Python
  • Experience with popular modern web frameworks such as Spring Boot, Play Framework, or Django
  • Demonstrated expertise in building cloud-native applications
  • Experience in API development using Swagger
  • Implementing automated testing platforms and unit tests
  • Proficient understanding of code versioning tools such as Git
  • Familiarity with continuous integration (e.g. Jenkins)
Responsibilities
  • Design and implement large-scale data processing pipelines using Kafka, Fluentd and Druid
  • Develop data ingestion processes and ETLs
  • Design and implement APIs
  • Assist in DevOps operations
  • Identify performance bottlenecks and bugs, and devise solutions to these problems
  • Help maintain code quality, organization, and documentation
  • Communicate with stakeholders regarding various aspects of the solution
  • Mentor team members on best practices
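The streaming-pipeline responsibilities above typically reduce to windowed aggregation over an event stream. A self-contained sketch in plain Python (standing in for Kafka plus Druid; the events and timestamps are invented):

```python
from collections import defaultdict

# Hypothetical events: (epoch_seconds, metric_name).
events = [(0, "login"), (3, "login"), (7, "click"), (12, "login"), (14, "click")]

def tumbling_window_counts(stream, width):
    """Count events per metric in fixed, non-overlapping windows of `width` seconds."""
    counts = defaultdict(int)
    for ts, name in stream:
        window_start = (ts // width) * width  # bucket the timestamp
        counts[(window_start, name)] += 1
    return dict(counts)

print(tumbling_window_counts(events, 10))
# {(0, 'login'): 2, (0, 'click'): 1, (10, 'login'): 1, (10, 'click'): 1}
```

In production the stream comes from a Kafka topic and the rolled-up counts land in a store like Druid, but the tumbling-window logic is the same.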
ERUDITUS Executive Education
Posted by Payal Thakker
Remote only
3 - 10 yrs
₹15L - ₹45L / yr
Docker
Kubernetes
Apache Kafka
Apache Beam
Python
+2 more
Emeritus is committed to teaching the skills of the future by making high-quality education accessible and affordable to individuals, companies, and governments around the world. It does this by collaborating with more than 50 top-tier universities across the United States, Europe, Latin America, Southeast Asia, India and China. Emeritus’ short courses, degree programs, professional certificates, and senior executive programs help individuals learn new skills and transform their lives, companies and organizations. Its unique model of state-of-the-art technology, curriculum innovation, and hands-on instruction from senior faculty, mentors and coaches has educated more than 250,000 individuals across 80+ countries. Founded in 2015, Emeritus, part of Eruditus Group, has more than 2,000 employees globally and offices in Mumbai, New Delhi, Shanghai, Singapore, Palo Alto, Mexico City, New York, Boston, London, and Dubai. Following its $650 million Series E funding round in August 2021, the Company is valued at $3.2 billion, and is backed by Accel, SoftBank Vision Fund 2, the Chan Zuckerberg Initiative, Leeds Illuminate, Prosus Ventures, Sequoia Capital India, and Bertelsmann.
 
 
As a data engineer at Emeritus, you'll be working on a wide variety of data problems. At this fast-paced company, you will frequently have to balance achieving an immediate goal with building sustainable and scalable architecture. The ideal candidate gets excited about streaming data, protocol buffers and microservices, and wants to develop and maintain a centralized data platform that provides accurate, comprehensive, and timely data to a growing organization.

Role & responsibilities:

    • Developing ETL pipelines for data replication
    • Analyze, query and manipulate data according to defined business rules and procedures
    • Manage very large-scale data from a multitude of sources into appropriate sets for research and development for data science and analysts across the company
    • Convert prototypes into production data engineering solutions through rigorous software engineering practices and modern deployment pipelines
    • Resolve internal and external data exceptions in a timely and accurate manner
    • Improve multi-environment data flow quality, security, and performance 

Skills & qualifications:

    • Must have experience with:
    • virtualization, containers, and orchestration (Docker, Kubernetes)
    • creating log ingestion pipelines (Apache Beam) both batch and streaming processing (Pub/Sub, Kafka)
    • workflow orchestration tools (Argo, Airflow)
    • supporting machine learning models in production
    • Have a desire to continually keep up with advancements in data engineering practices
    • Strong Python programming and exploratory data analysis skills
    • Ability to work independently and with team members from different backgrounds
    • At least a bachelor's degree in an analytical or technical field. This could be applied mathematics, statistics, computer science, operations research, economics, etc. Higher education is welcome and encouraged.
    • 3+ years of work in software/data engineering.
    • Superior interpersonal skills, independent judgment, and complex problem-solving ability
    • Global orientation, experience working across countries, regions and time zones
Emeritus provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.
TVS Credit Services
Posted by Vinodhkumar Panneerselvam
Chennai
4 - 10 yrs
₹10L - ₹20L / yr
Data Science
R Programming
Python
Machine Learning (ML)
Hadoop
+3 more
Job Description:
  • Be responsible for scaling our analytics capability across all internal disciplines and guide our strategic direction in regards to analytics
  • Organize and analyze large, diverse data sets across multiple platforms
  • Identify key insights and leverage them to inform and influence product strategy
  • Technical interactions with vendors or partners in a technical capacity for scope / approach & deliverables
  • Develop proofs of concept to prove or disprove the validity of a concept
  • Work with all parts of the business to identify analytical requirements and formalize an approach for reliable, relevant, accurate, efficient reporting on those requirements
  • Design and implement advanced statistical testing for customized problem solving
  • Deliver concise verbal and written explanations of analyses to senior management that elevate findings into strategic recommendations

Desired Candidate Profile:
  • MTech / BE / BTech / MSc in CS, Stats, Maths, Operations Research, Econometrics or any quantitative field
  • Experience in using Python, R, SAS
  • Experience in working with large data sets and big data systems (SQL, Hadoop, Hive, etc.)
  • Keen aptitude for large-scale data analysis with a passion for identifying key insights from data
  • Expert working knowledge of various machine learning algorithms such as XGBoost, SVM, etc.

We are looking for candidates with experience in the following areas:
  • Unsecured Loans & SME Loans analytics (cards, installment loans) - risk-based pricing analytics
  • Differential pricing / selection analytics (retail, airlines / travel, etc.)
  • Digital product companies or digital eCommerce, with a product mindset
  • Fraud / Risk from banks, NBFCs / fintechs / credit bureaus
  • Online media, with knowledge of media, online ads & sales (agencies) - knowledge of DMP, DFP, Adobe/Omniture tools, Cloud
  • Consumer Durable Loans lending companies (experience in Credit Cards, Personal Loans - optional)
  • Tractor Loans lending companies (experience in Farm)
  • Recovery, Collections analytics
  • Marketing Analytics with Digital Marketing, Market Mix modelling, Advertising Technology
High-Growth Fintech Startup
Agency job
via Unnati by Ramya Senthilnathan
Remote, Mumbai
3 - 5 yrs
₹7L - ₹10L / yr
Business Intelligence (BI)
PowerBI
Analytics
Reporting
Data management
+5 more
Want to join a trailblazing fintech company that is leveraging software and technology to change the face of short-term financing in India?

Our client is an innovative Fintech company that is revolutionizing the business of short term finance. The company is an online lending startup that is driven by an app-enabled technology platform to solve the funding challenges of SMEs by offering quick-turnaround, paperless business loans without collateral. It counts over 2 million small businesses across 18 cities and towns as its customers. Its founders are IIT and ISB alumni with deep experience in the fin-tech industry, from earlier working with organizations like Axis Bank, Aditya Birla Group, Fractal Analytics, and Housing.com. It has raised funds of Rs. 100 Crore from finance industry stalwarts and is growing by leaps and bounds.
 
As a Data Analyst - SQL, you will be working on projects in the Analytics function to generate insights for business as well as manage reporting for the management for all things related to Lending.
 
You will be part of a rapidly growing tech-driven organization and will be responsible for generating insights that will drive business impact and productivity improvements.
 
What you will do:
  • Ensuring ease of data availability, with relevant dimensions, using Business Intelligence tools.
  • Providing strong reporting and analytical information support to the management team.
  • Transforming raw data into essential metrics based on the needs of relevant stakeholders.
  • Performing data analysis for generating reports on a periodic basis.
  • Converting essential data into easy-to-reference visuals using data visualization tools (PowerBI, Metabase).
  • Providing recommendations to update the current MIS to improve reporting efficiency and consistency.
  • Bringing fresh ideas to the table and keeping a keen eye on trends in the analytics and financial services industry.

 

 

What you need to have:
  • B.Tech / B.E. or MBA / PGDM graduate, with 3+ years of work experience.
  • Experience in reporting, data management (SQL, MongoDB) and visualization (PowerBI, Metabase, Data Studio).
  • Work experience in financial services - Indian banks' / NBFCs' in-house analytics units or fintech / analytics start-ups - would be a plus.
Skills:
  • Skilled at writing and optimizing large, complicated SQL queries and MongoDB scripts.
  • Strong knowledge of the banking / financial services domain.
  • Experience with some of the modern relational databases.
  • Self-driven, with the ability to work on multiple projects of a different nature.
  • Ability to liaise with cross-functional teams to resolve data issues and build strong reports.
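The "large, complicated SQL queries" skill above can be illustrated at toy scale with a grouped lending report (the table and figures are invented; SQLite stands in for the production warehouse):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE loans (city TEXT, amount INTEGER, status TEXT)")
conn.executemany(
    "INSERT INTO loans VALUES (?, ?, ?)",
    [("Mumbai", 500000, "active"),
     ("Mumbai", 200000, "closed"),
     ("Pune", 300000, "active")],
)
# An index on the filter/group column lets the engine avoid a full scan
# once the table grows large - a typical first optimization step.
conn.execute("CREATE INDEX idx_loans_city ON loans (city)")

rows = conn.execute(
    """SELECT city, COUNT(*) AS n, SUM(amount) AS disbursed
       FROM loans
       WHERE status = 'active'
       GROUP BY city
       ORDER BY disbursed DESC"""
).fetchall()
print(rows)  # [('Mumbai', 1, 500000), ('Pune', 1, 300000)]
```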

 

Carbynetech
Posted by Sahithi Kandlakunta
Hyderabad
3 - 9 yrs
₹6L - ₹8L / yr
PowerBI
DAX
MDX
Windows Azure
Databricks
+1 more
Should have experience in the build and delivery of dashboard and analytics solutions using Microsoft's Power BI and related Azure data services.
Should have business intelligence experience in a data warehouse environment.
Should have good experience in writing Power Query, DAX and MDX for complex data projects.
Should be good with REST services, including API documentation.

Should have experience authoring, diagnosing, and altering SQL Server objects and T-SQL queries.

Should have worked on tabular models in Azure Analysis Services or SSAS.

Should have experience with the Microsoft Azure platform.
NeenOpal Intelligent Solutions Private Limited
Posted by Pavel Gupta
Remote, Bengaluru (Bangalore)
2 - 5 yrs
₹6L - ₹12L / yr
ETL
Python
Amazon Web Services (AWS)
SQL
PostgreSQL

We are actively seeking a Senior Data Engineer experienced in building data pipelines and integrations from 3rd party data sources by writing custom automated ETL jobs using Python. The role will work in partnership with other members of the Business Analytics team to support the development and implementation of new and existing data warehouse solutions for our clients. This includes designing database import/export processes used to generate client data warehouse deliverables.

 

Requirements
  • 2+ Years experience as an ETL developer with strong data architecture knowledge around data warehousing concepts, SQL development and optimization, and operational support models.
  • Experience using Python to automate ETL / data processing jobs.
  • Design and develop ETL and data processing solutions using data integration tools, Python scripts, and AWS / Azure / on-premise environments.
  • Experience / Willingness to learn AWS Glue / AWS Data Pipeline / Azure Data Factory for Data Integration.
  • Develop and create transformation queries, views, and stored procedures for ETL processes, and process automation.
  • Document data mappings, data dictionaries, processes, programs, and solutions as per established standards for data governance.
  • Work with the data analytics team to assess and troubleshoot potential data quality issues at key intake points such as validating control totals at intake and then upon transformation, and transparently build lessons learned into future data quality assessments
  • Solid experience with data modeling, business logic, and RESTful APIs.
  • Solid experience in the Linux environment.
  • Experience with NoSQL / PostgreSQL preferred
  • Experience working with databases such as MySQL, NoSQL, and Postgres, and enterprise-level connectivity experience (such as connecting over TLS and through proxies).
  • Experience with NGINX and SSL.
  • Performance tune data processes and SQL queries, and recommend and implement data process optimization and query tuning techniques.
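A recurring building block behind ETL process automation like the above is the idempotent incremental load: replaying a batch must not create duplicate rows. A hedged sketch using SQLite's upsert clause (the table and keys are invented for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE customers (id INTEGER PRIMARY KEY, email TEXT, updated TEXT)"
)

def upsert(rows):
    # ON CONFLICT makes the load idempotent: a replayed batch updates
    # existing rows in place instead of inserting duplicates.
    conn.executemany(
        """INSERT INTO customers (id, email, updated) VALUES (?, ?, ?)
           ON CONFLICT(id) DO UPDATE SET
               email = excluded.email,
               updated = excluded.updated""",
        rows,
    )

upsert([(1, "a@x.com", "2024-01-01"), (2, "b@x.com", "2024-01-01")])
# An overlapping batch: row 1 is updated, row 3 is inserted.
upsert([(1, "a-new@x.com", "2024-02-01"), (3, "c@x.com", "2024-02-01")])

print(conn.execute("SELECT COUNT(*) FROM customers").fetchone()[0])  # 3
print(conn.execute("SELECT email FROM customers WHERE id = 1").fetchone()[0])
# a-new@x.com
```

Most warehouse targets (Postgres, Redshift, BigQuery) offer an equivalent upsert or MERGE construct, so the same pattern carries over.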
Pune
10 - 18 yrs
₹35L - ₹40L / yr
Google Cloud Platform (GCP)
Dataflow architecture
Data migration
Data processing
Big Data
+4 more

The candidate will be deployed at a financial captive organization in Pune (Kharadi).

 

Below are the job details:

 

Experience: 10 to 18 years

 

Mandatory skills –

  • Data migration
  • Data flow

The ideal candidate for this role will have the below experience and qualifications:  

  • Experience building a range of services with a cloud service provider (ideally GCP)
  • Hands-on design and development on Google Cloud Platform (GCP), across a wide range of GCP services, including hands-on experience with GCP storage and database technologies
  • Hands-on experience in architecting, designing or implementing solutions on GCP, K8s and other Google technologies; security and compliance, e.g. IAM and cloud compliance / auditing / monitoring tools
  • Desired skills within the GCP stack - Cloud Run, GKE, serverless, Cloud Functions, Vision API, DLP, Dataflow, Data Fusion
  • Prior experience migrating on-prem applications to cloud environments; knowledge and hands-on experience of Stackdriver, Pub/Sub, VPCs, subnets, route tables, load balancers and firewalls, both on premise and in GCP
  • Integrate, configure, deploy and manage centrally provided common cloud services (e.g. IAM, networking, logging, operating systems, containers)
  • Manage SDN in GCP; knowledge and experience of DevOps technologies around continuous integration & delivery in GCP using Jenkins
  • Hands-on experience with Terraform, Kubernetes, Docker and Stackdriver
  • Programming experience in one or more of the following languages: Python, Ruby, Java, JavaScript, Go, Groovy, Scala
  • Knowledge of or experience with DevOps tooling such as Jenkins, Git, Ansible, Splunk, Jira or Confluence, AppD, Docker, Kubernetes
  • Act as a consultant and subject matter expert for internal teams to resolve technical deployment obstacles and improve the product's vision; ensure compliance with centrally defined security
  • Financial experience is preferred
  • Ability to learn new technologies and rapidly prototype newer concepts
  • Top-down thinker, excellent communicator, and great problem solver

 


 

The candidate must have experience in the following:

  • GCP Data Platform
  • Data processing: Dataflow, Dataprep, Data Fusion
  • Data storage: BigQuery, Cloud SQL
  • Pub/Sub, GCS buckets
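Pub/Sub, listed above, is at heart a fan-out pattern: every subscriber receives its own copy of each published message. A minimal in-memory illustration (this is not the google-cloud-pubsub API; the class and fields are invented):

```python
# Hypothetical in-memory stand-in for a Pub/Sub topic: publishers push
# messages, and each subscription receives its own copy (fan-out).
class Topic:
    def __init__(self):
        self.subscriptions = []

    def subscribe(self):
        # Each subscriber gets an independent message buffer.
        sub = []
        self.subscriptions.append(sub)
        return sub

    def publish(self, message):
        # Deliver the message to every subscription.
        for sub in self.subscriptions:
            sub.append(message)

topic = Topic()
billing = topic.subscribe()
audit = topic.subscribe()
topic.publish({"event": "payment", "amount": 100})
print(billing == audit == [{"event": "payment", "amount": 100}])  # True
```

The real service adds durability, acknowledgements and push/pull delivery, but decoupling producers from an arbitrary number of consumers is the core idea.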
Pluto Seven Business Solutions Pvt Ltd
Posted by Sindhu Narayan
Bengaluru (Bangalore)
3 - 9 yrs
₹6L - ₹18L / yr
MySQL
Python
Big Data
Google Cloud Storage
API
+3 more
Data Engineer: Pluto7 is a services and solutions company focused on building ML, AI and analytics solutions to accelerate business transformation. We are a Premier Google Cloud Partner, servicing the retail, manufacturing, healthcare and hi-tech industries. We're seeking passionate people to work with us to change the way data is captured, accessed and processed, to make data-driven, insightful decisions.

Must-have skills:
  • Hands-on experience with database systems (structured and unstructured)
  • Programming in Python, R, SAS
  • Overall knowledge of, and exposure to, how to architect solutions on cloud platforms like GCP, AWS, Microsoft Azure
  • Develop and maintain scalable data pipelines, with a focus on writing clean, fault-tolerant code
  • Hands-on experience in data model design and developing BigQuery/SQL (any variant) stored procedures
  • Optimize data structures for efficient querying of those systems
  • Collaborate with internal and external data sources to ensure integrations are accurate, scalable and maintainable
  • Collaborate with business intelligence / analytics teams on data mart optimizations, query tuning and database design
  • Execute proofs of concept to assess strategic opportunities and future data extraction and integration capabilities
  • At least 2 years of experience in building applications, solutions and products based on analytics
  • Data extraction, data cleansing and transformation
  • Strong knowledge of REST APIs, HTTP servers, MVC architecture
  • Knowledge of continuous integration / continuous deployment

Preferred but not required:
  • Machine learning and deep learning experience
  • Certification on any cloud platform
  • Experience of data migration from on-prem to cloud environments
  • Exceptional analytical, quantitative, problem-solving, and critical thinking skills
  • Excellent verbal and written communication skills

Work Location: Bangalore