Big Data Developer / Lead / Architect

at Telecom Client

Agency job
Chennai
5 - 13 yrs
₹9L - ₹28L / yr
Full time
Skills
Spark
Hadoop
Big Data
Data engineering
PySpark
Apache Hive
ETL
.NET
Microsoft Windows Azure
PowerBI
Apache Kafka
  • Demonstrable experience (5+ years) owning and developing big data solutions using Hadoop, Hive/HBase, Spark, Databricks and ETL/ELT (a minimal PySpark ETL sketch follows this list)
  • 10+ years of Information Technology experience, preferably with telecom / wireless service providers
  • Experience in designing data solutions following Agile practices (SAFe methodology); designing for testability, deployability and releasability; rapid prototyping, data modeling, and decentralized innovation
  • DataOps mindset: allowing the architecture of a system to evolve continuously over time while simultaneously supporting the needs of current users
  • Create and maintain the Architectural Runway and Non-Functional Requirements
  • Design for the Continuous Delivery Pipeline (CI/CD data pipeline) and enable Built-in Quality & Security from the start
  • Demonstrable understanding, and ideally use, of at least one recognised architecture framework or standard, e.g. TOGAF or the Zachman Framework
  • The ability to apply data, research, and professional judgment and experience to ensure our products are making the biggest difference to consumers
  • Demonstrated ability to work collaboratively
  • Excellent written, verbal and social skills - you will be interacting with all types of people (user experience designers, developers, managers, marketers, etc.)
  • Ability to work in a fast-paced, multi-project environment independently and with minimal supervision
  • Technologies: .NET, AWS, Azure; Azure Synapse, NiFi, RDS, Apache Kafka, Azure Databricks, Azure Data Lake Storage, Power BI, Reporting Analytics, QlikView, on-prem SQL data warehouse; BSS, OSS & Enterprise Support Systems
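To give a concrete flavour of the Spark/Databricks ETL work described above, here is a minimal PySpark sketch. It is illustrative only: the data lake path, table name and column names are hypothetical, and a real pipeline on this stack would typically run as a scheduled Databricks job with proper configuration and secrets handling.

```python
# Illustrative PySpark batch ETL sketch (hypothetical paths and columns).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("usage-etl-sketch").getOrCreate()

# Extract: read raw call-detail records from a (hypothetical) data lake path.
raw = spark.read.json("abfss://raw@examplelake.dfs.core.windows.net/cdr/")

# Transform: basic cleansing and a per-subscriber daily aggregate.
daily_usage = (
    raw.dropDuplicates(["record_id"])
       .withColumn("event_date", F.to_date("event_ts"))
       .filter(F.col("duration_sec") >= 0)
       .groupBy("subscriber_id", "event_date")
       .agg(F.sum("duration_sec").alias("total_duration_sec"),
            F.count("*").alias("call_count"))
)

# Load: write a partitioned table for downstream reporting (e.g. Power BI).
(daily_usage.write
    .mode("overwrite")
    .partitionBy("event_date")
    .saveAsTable("analytics.daily_subscriber_usage"))
```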


Similar jobs

Senior Data Engineer

at Klubworks

Founded 2019  •  Product  •  20-100 employees  •  Raised funding
Spark
Hadoop
Big Data
Data engineering
PySpark
Python
C++
Bengaluru (Bangalore)
4 - 8 yrs
Best in industry

We are searching for an accountable, multitalented data engineer to facilitate the operations of our data scientists. The data engineer will be responsible for employing machine learning techniques to create and sustain structures that allow for the analysis of data while remaining familiar with dominant programming and deployment strategies in the field. During various aspects of this process, you should collaborate with coworkers to ensure that your approach meets the needs of each project.

To ensure success as a data engineer, you should demonstrate flexibility, creativity, and the capacity to receive and utilize constructive criticism. A formidable data engineer will demonstrate insatiable curiosity and outstanding interpersonal skills.

Responsibilities:

  • Liaising with coworkers and clients to elucidate the requirements for each task.
  • Conceptualizing and generating infrastructure that allows big data to be accessed and analyzed.
  • Reformulating existing frameworks to optimize their functioning.
  • Testing such structures to ensure that they are fit for use.
  • Preparing raw data for manipulation by data scientists (see the sketch after this list).
  • Detecting and correcting errors in your work.
  • Ensuring that your work remains backed up and readily accessible to relevant coworkers.
  • Remaining up-to-date with industry standards and technological advancements that will improve the quality of your outputs.
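As a hedged illustration of the "preparing raw data" responsibility above, the sketch below shows a typical cleaning pass in PySpark; the input path, columns and rules are hypothetical and would be driven by the actual project requirements.

```python
# Illustrative raw-data preparation pass in PySpark (hypothetical dataset).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("raw-data-prep").getOrCreate()

events = spark.read.option("header", True).csv("/data/raw/events/")

clean = (
    events
    .dropDuplicates(["event_id"])                           # remove exact duplicates
    .withColumn("amount", F.col("amount").cast("double"))   # enforce numeric type
    .withColumn("event_ts", F.to_timestamp("event_ts"))     # parse timestamps
    .na.drop(subset=["event_id", "user_id"])                # require key columns
    .fillna({"amount": 0.0})                                # default missing amounts
)

# Hand off a columnar, analysis-ready copy to the data science team.
clean.write.mode("overwrite").parquet("/data/curated/events/")
```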


Requirements:

  • Bachelor's degree in data engineering, big data analytics, computer engineering, or related field.
  • Master's degree in a relevant field is advantageous.
  • Proven experience as a data engineer, software developer, or similar.
  • Expert proficiency in Python, C++, Java, R, and SQL.
  • Familiarity with Hadoop or suitable equivalent.
  • Excellent analytical and problem-solving skills.
  • A knack for independence and group work.
  • Scrupulous approach to duties.
  • Capacity to successfully manage a pipeline of duties with minimal supervision.
Job posted by
Anupam Arya

Data Science - Risk

at Rupifi

Founded 2020  •  Product  •  20-100 employees  •  Raised funding
Data Analytics
Risk Management
Risk analysis
Data Science
Machine Learning (ML)
Python
SQL
Data Visualization
Big Data
Tableau
Data Structures
Bengaluru (Bangalore)
4 - 7 yrs
₹15L - ₹50L / yr

Data Scientist (Risk)/Sr. Data Scientist (Risk)


As a part of the Data Science/Analytics team at Rupifi, you will play a significant role in helping define the business/product vision and deliver it from the ground up by working with passionate, high-performing individuals in a very fast-paced working environment.


You will work closely with Data Scientists & Analysts, Engineers, Designers, Product Managers, Ops Managers and Business Leaders, and help the team make informed, data-driven decisions and deliver high business impact.


Preferred Skills & Responsibilities: 

  1. Analyze data to better understand potential risks, concerns and outcomes of decisions.
  2. Aggregate data from multiple sources to provide a comprehensive assessment.
  3. Past experience of working with business users to understand and define inputs for risk models.
  4. Ability to design and implement best-in-class risk models in the banking & fintech domain.
  5. Ability to quickly understand changing market trends and incorporate them into model inputs.
  6. Expertise in statistical analysis and modeling.
  7. Ability to translate complex model outputs into understandable insights for business users.
  8. Collaborate with other team members to effectively analyze and present data.
  9. Conduct research into potential clients and understand the risks of accepting each one.
  10. Monitor internal and external data points that may affect the risk level of a decision.

Tech skills: 

  • Hands-on experience in Python & SQL (a brief illustrative sketch follows this list).
  • Hands-on experience in any visualization tool, preferably Tableau.
  • Hands-on experience in the machine learning & deep learning area.
  • Experience in handling complex data sources.
  • Experience with modeling techniques in the fintech/banking domain.
  • Experience working with big data and distributed computing.
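As a hedged illustration of the Python-based risk modeling this role involves, the sketch below trains a simple credit-risk classifier; the feature names and data file are hypothetical, and a production model at a fintech would involve far more rigorous feature engineering, validation and monitoring.

```python
# Illustrative risk-scoring sketch (hypothetical features and data file).
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Hypothetical repayment dataset: one row per borrower.
df = pd.read_csv("borrowers.csv")
features = ["credit_utilization", "days_past_due", "monthly_gmv", "tenure_months"]
X, y = df[features], df["defaulted"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# AUC is a common first check of a risk model's ranking power.
scores = model.predict_proba(X_test)[:, 1]
print("Test AUC:", roc_auc_score(y_test, scores))
```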

Preferred Qualifications: 

  • A BTech/BE/MSc degree in Math, Engineering, Statistics, Economics, ML, Operations Research, or a similar quantitative field.
  • 3 to 10 years of modeling experience in the fintech/banking domain in fields like collections, underwriting, customer management, etc.
  • Strong analytical skills with good problem-solving ability
  • Strong presentation and communication skills
  • Experience in working on advanced machine learning techniques
  • Quantitative and analytical skills with a demonstrated ability to understand new analytical concepts.
Job posted by
Richa Tiwari

Big Data Lead

at Rishabh Software

Founded 2001  •  Products & Services  •  100-1000 employees  •  Profitable
Spark
Hadoop
Big Data
Data engineering
PySpark
Apache Hive
Pig
Scala
Remote only
5 - 7 yrs
₹9L - ₹23L / yr

Greetings from Rishabh Software!!


An opportunity to enhance your career as a big data developer awaits at your doorstep.

Here is a brief about our company, followed by the JD:

About Us:

Rishabh Software (CMMI Level 3), an India-based IT service provider, focuses on cost-effective, high-quality and timely delivered offshore software development, Business Process Outsourcing (BPO) and engineering services. Our core competency lies in developing customized software solutions using web-based and client/server technology.

With over 20 years of software development experience working with various domestic and international companies, we at Rishabh Software provide solutions tailored to client requirements that help industries across domains turn business problems into strategic advantages.

Through our offices in the US (Silicon Valley), the UK (London) and India (Vadodara & Bangalore), we serve our global clients with high-quality, well-executed software development, BPO and engineering services.

Please visit our URL www.rishabhsoft.com

 

Job Description:

We are looking to hire a talented big data engineer to develop and manage our company’s Big Data solutions. In this role, you will be required to design and implement Big Data tools and frameworks, implement ELT processes, collaborate with development teams, build cloud platforms, and maintain the production system.

To ensure success as a big data engineer, you should have in-depth knowledge of Hadoop technologies and excellent problem-solving skills. A top-notch Big Data Engineer understands the needs of the company and institutes scalable data solutions for its current and future needs.

Responsibilities:

  • Regular interaction with the client and internal teams to understand Big Data needs.
  • Developing Hadoop systems.
  • Loading disparate data sets and conducting pre-processing services using Hive or Pig (a minimal sketch follows this list).
  • Finalizing the scope of the system and delivering Big Data solutions.
  • Managing the communications between the internal system and the survey vendor.
  • Collaborating with the software research and development teams.
  • Building cloud platforms for the development of company applications.
  • Maintaining production systems.
  • Training staff on data resource management.
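As a hedged illustration of the Hive-based pre-processing mentioned above, the sketch below uses PySpark with Hive support to load and lightly clean a raw table; the database, table and column names are all hypothetical.

```python
# Illustrative Hive pre-processing via Spark SQL (hypothetical tables/columns).
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("hive-preprocessing-sketch")
    .enableHiveSupport()   # lets Spark read/write Hive metastore tables
    .getOrCreate()
)

spark.sql("CREATE DATABASE IF NOT EXISTS curated")

# Load a raw Hive table, standardise a few fields and drop obvious bad rows.
spark.sql("""
    CREATE TABLE IF NOT EXISTS curated.orders_clean
    STORED AS PARQUET AS
    SELECT
        order_id,
        LOWER(TRIM(customer_email))   AS customer_email,
        CAST(order_amount AS DOUBLE)  AS order_amount,
        TO_DATE(order_ts)             AS order_date
    FROM raw.orders
    WHERE order_id IS NOT NULL
""")
```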

Requirements:

  • Bachelor’s degree in computer engineering or computer science
  • Previous experience as a big data engineer/developer
  • In-depth knowledge of Apache Spark, Hadoop, Sqoop, Pig, Hive and Kafka
  • In-depth knowledge and understanding of AWS cloud
  • Knowledge of one of the languages such as Java, C++, Ruby, PHP, Python or R, plus familiarity with Linux
  • Knowledge of NoSQL and RDBMS databases including Redis and MongoDB
  • Excellent problem management skills
  • Training newly joined resources and supporting them to make their journey successful within the organization
  • Good communication skills
  • Ability to solve complex networking, data, and software issues


We will be happy to receive your response with your latest resume if your profile matches the above JD, and we shall connect back for a detailed discussion.

Job posted by
Baiju Sukumaran
Data Visualization
PowerBI
ETL
Business Intelligence (BI)
Data Analytics
SQL
Apache Hive
Spark
Informatica
OLAP
oracle BI
Bengaluru (Bangalore)
5 - 10 yrs
₹10L - ₹25L / yr
Main Responsibilities:

  • Work closely with different Front Office and Support Function stakeholders, including but not restricted to Business Management, Accounts, Regulatory Reporting, Operations, Risk, Compliance and HR, on all data collection and reporting use cases.
  • Collaborate with Business and Technology teams to understand enterprise data, and create an innovative narrative to explain, engage and enlighten regular staff members as well as executive leadership with data-driven storytelling.
  • Solve data consumption and visualization through a data-as-a-service distribution model.
  • Articulate findings clearly and concisely for different target use cases, including through presentations, design solutions and visualizations.
  • Perform ad hoc / automated report generation tasks using Power BI, Oracle BI and Informatica.
  • Perform data access/transfer and ETL automation tasks using Python, SQL, OLAP/OLTP, RESTful APIs, and IT tools (CFT, MQ-Series, Control-M, etc.) - a small illustrative sketch follows this list.
  • Provide support and maintain the availability of BI applications irrespective of the hosting location.
  • Resolve issues escalated from Business and Functional areas on data quality, accuracy and availability, and provide incident-related communications promptly.
  • Work to strict deadlines on high-priority regulatory reports.
  • Serve as a liaison between business and technology to ensure that data-related business requirements for protecting sensitive data are clearly defined, communicated, well understood, and considered as part of operational prioritization and planning.
  • Work for the APAC Chief Data Office and coordinate with a fully decentralized team across different locations in APAC and global HQ (Paris).
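As a hedged illustration of the Python/SQL/REST-based ETL automation mentioned above, the sketch below pulls records from a hypothetical internal REST endpoint and loads them into a relational staging table. The endpoint, schema and SQLite stand-in database are assumptions; a real implementation would use the bank's approved connectivity, credential handling and schedulers (e.g. Control-M).

```python
# Illustrative REST-to-SQL ETL automation sketch (hypothetical endpoint/table).
import requests
import sqlite3  # stand-in for the actual on-prem SQL warehouse connection

API_URL = "https://reporting.example.internal/api/v1/positions"  # hypothetical

def extract(as_of_date: str) -> list[dict]:
    """Pull the day's records from the internal reporting API."""
    resp = requests.get(API_URL, params={"as_of": as_of_date}, timeout=30)
    resp.raise_for_status()
    return resp.json()["items"]  # payload shape is an assumption

def load(rows: list[dict]) -> None:
    """Insert records into a staging table for downstream BI reports."""
    with sqlite3.connect("staging.db") as conn:
        conn.execute("""CREATE TABLE IF NOT EXISTS positions_staging
                        (trade_id TEXT, desk TEXT, notional REAL, as_of TEXT)""")
        conn.executemany(
            "INSERT INTO positions_staging VALUES (?, ?, ?, ?)",
            [(r["trade_id"], r["desk"], r["notional"], r["as_of"]) for r in rows],
        )

if __name__ == "__main__":
    load(extract("2024-01-31"))
```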

General Skills:

  • Excellent knowledge of RDBMS and hands-on experience with complex SQL is a must; some experience with NoSQL and Big Data technologies like Hive and Spark would be a plus.
  • Experience with industrialized reporting on BI tools like Power BI and Informatica.
  • Knowledge of data-related industry best practices in the highly regulated CIB industry; experience with regulatory report generation for financial institutions.
  • Knowledge of industry-leading data access, data security, Master Data and Reference Data Management, and establishing data lineage.
  • 5+ years of experience in Data Visualization / Business Intelligence / ETL developer roles.
  • Ability to multi-task and manage various projects simultaneously.
  • Attention to detail.
  • Ability to present to Senior Management and ExCo; excellent written and verbal communication skills.
Job posted by
Naveen Taalanki

Senior Data Engineer

at Bookr Inc

Founded 2019  •  Products & Services  •  20-100 employees  •  Raised funding
Big Data
Hadoop
Spark
Data engineering
Data Warehouse (DWH)
ETL
EMR
Amazon Redshift
PostgreSQL
SQL
Scala
Java
Python
airflow
Remote, Chennai, Bengaluru (Bangalore)
4 - 7 yrs
₹15L - ₹35L / yr

In this role you'll get.

  • Be part of the core team for the data platform; set up the platform foundation while adhering to all required quality standards and design patterns
  • Write efficient, quality code that can scale
  • Adopt Bookr quality standards, and recommend process standards and best practices
  • Research, learn & adapt new technologies to solve problems & improve existing solutions
  • Contribute to the engineering excellence backlog
  • Identify performance issues
  • Conduct effective code and design reviews
  • Improve reliability of the overall production system by proactively identifying patterns of failure
  • Lead and mentor junior engineers by example
  • Take end-to-end ownership of stories (including design, serviceability, performance, failure handling)
  • Strive hard to provide the best experience to anyone using our products
  • Conceptualise innovative and elegant solutions to solve challenging big data problems
  • Engage with Product Management and Business to drive the agenda, set your priorities and deliver awesome products
  • Adhere to company policies, procedures, mission, values, and standards of ethics and integrity

 

On day one we'll expect you to.

  • B.E/B.Tech from a reputed institution
  • Minimum 5 years of software development experience and at least a year of experience in leading/guiding people
  • Expert coding skills in Python/PySpark or Java/Scala
  • Deep understanding of the Big Data ecosystem - Hadoop and Spark
  • Must have project experience with Spark
  • Ability to independently troubleshoot Spark jobs
  • Good understanding of distributed systems
  • Fast learner who quickly adapts to new technologies
  • High ownership and commitment
  • Expert hands-on experience with RDBMS
  • Ability to work independently as well as collaboratively in a team

 

Added bonuses you have.

  • Hands-on experience with EMR/Glue/Databricks
  • Hands-on experience with Airflow (a minimal DAG sketch follows this list)
  • Hands-on experience with the AWS Big Data ecosystem
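As a hedged illustration of the Airflow experience listed above, here is a minimal Airflow 2.x-style DAG that schedules a daily Spark job; the script path, schedule and operator choice are hypothetical and would follow the team's own conventions.

```python
# Illustrative Airflow DAG: schedule a daily PySpark batch job (hypothetical paths).
from datetime import datetime
from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_events_etl",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",   # Airflow 2.x style scheduling
    catchup=False,
    tags=["example"],
) as dag:
    # In practice this might be a SparkSubmitOperator or an EMR/Databricks operator;
    # a plain spark-submit keeps the sketch self-contained.
    run_etl = BashOperator(
        task_id="run_spark_etl",
        bash_command="spark-submit /opt/jobs/daily_events_etl.py {{ ds }}",
    )
```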

 

We are looking for passionate engineers who are always hungry for challenging problems. We believe in creating an opportunistic, yet balanced, work environment for savvy, entrepreneurial tech individuals. We thrive on remote work, with a team working across multiple time zones.

 

 

  • Flexible hours & remote work - We are a results-focused bunch, so we encourage you to work whenever and wherever you feel most creative and focused.
  • Unlimited PTO - We want you to feel free to recharge your batteries when you need it!
  • Stock options - Opportunity to participate in the company stock plan.
  • Flat hierarchy - Team leaders at your fingertips.
  • BFC (stands for bureaucracy-free company) - We're action-oriented and don't bother with dragged-out meetings or pointless admin exercises; we'd rather get our hands dirty!
  • Working alongside leaders - Being part of the core team gives you the opportunity to work directly with the founding and management team.

 

Job posted by
Nimish Mehta
Big Data
Spark
ETL
Apache
Hadoop
Data engineering
Amazon Web Services (AWS)
Bengaluru (Bangalore), Hyderabad
3 - 6 yrs
₹10L - ₹15L / yr
Desired Skill, Experience, Qualifications, and Certifications:

  • 5+ years' experience developing and maintaining modern ingestion pipelines using technologies like Spark, Apache NiFi, etc.
  • 2+ years' experience with Healthcare Payors (focusing on Membership, Enrollment, Eligibility, Claims, Clinical)
  • Hands-on experience with AWS Cloud and its native components like S3, Athena, Redshift & Jupyter Notebooks
  • Strong in Spark Scala & Python pipelines (ETL & Streaming) - a minimal streaming sketch follows this list
  • Strong experience in metadata management tools like AWS Glue
  • Strong experience in coding with languages like Java, Python
  • Worked on designing ETL & streaming pipelines in Spark Scala / Python
  • Good experience in requirements gathering, design & development
  • Working with cross-functional teams to meet strategic goals
  • Experience in high-volume data environments
  • Critical thinking and excellent verbal and written communication skills
  • Strong problem-solving and analytical abilities; should be able to work and deliver individually
  • Good-to-have: AWS Developer certification, Scala coding experience, Postman-API and Apache Airflow or similar scheduler experience
  • Nice-to-have: experience in healthcare messaging standards like HL7, CCDA, EDI, 834, 835, 837
  • Good communication skills
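As a hedged illustration of the Spark streaming pipelines mentioned above (shown in PySpark for brevity, although the role equally calls for Scala), the sketch below reads events from a hypothetical Kafka topic and appends them to S3; the brokers, topic, schema and bucket are all assumptions.

```python
# Illustrative Spark Structured Streaming sketch (hypothetical Kafka topic and S3 paths).
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("claims-stream-sketch").getOrCreate()

claim_schema = StructType([
    StructField("claim_id", StringType()),
    StructField("member_id", StringType()),
    StructField("amount", DoubleType()),
])

# Read raw events from Kafka and parse the JSON payload.
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")   # hypothetical brokers
    .option("subscribe", "claims-events")                 # hypothetical topic
    .load()
    .select(F.from_json(F.col("value").cast("string"), claim_schema).alias("claim"))
    .select("claim.*")
)

# Continuously append parsed claims to an S3 data lake path.
query = (
    events.writeStream
    .format("parquet")
    .option("path", "s3a://example-datalake/claims/")                  # hypothetical bucket
    .option("checkpointLocation", "s3a://example-datalake/checkpoints/claims/")
    .start()
)
query.awaitTermination()
```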
Job posted by
geeti gaurav mohanty

Senior Computer Vision Developer

at Lincode Labs India Pvt Ltd

Founded 2017  •  Products & Services  •  20-100 employees  •  Raised funding
OpenCV
Deep Learning
Artificial Intelligence (AI)
TensorFlow
Python
Apache Kafka
Bengaluru (Bangalore)
4 - 10 yrs
₹6L - ₹20L / yr
Please apply if and only if you enjoy engineering, wish to write a lot of code, wish to do a lot of hands-on Python experimentation, and already have in-depth knowledge of deep learning. This position is strictly for people who understand various neural network architectures and can customize a neural network, not for people whose experience is limited to downloading and running off-the-shelf AI/ML code.

This position is not for freshers. We are looking for candidates with at least 4 years of AI/ML/CV experience in the industry.
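To make the "customize the neural network" expectation concrete, here is a hedged sketch of a small custom Keras layer wired into a tiny classifier; the block and architecture are purely illustrative and do not represent Lincode's actual models.

```python
# Illustrative custom Keras layer and tiny CNN (not any specific production model).
import tensorflow as tf
from tensorflow import keras

class ScaledResidualBlock(keras.layers.Layer):
    """A simple custom block: conv branch plus a learnably scaled skip connection."""

    def __init__(self, filters: int, **kwargs):
        super().__init__(**kwargs)
        self.conv = keras.layers.Conv2D(filters, 3, padding="same", activation="relu")
        self.proj = keras.layers.Conv2D(filters, 1, padding="same")
        self.scale = self.add_weight(name="scale", shape=(), initializer="ones")

    def call(self, inputs):
        return self.conv(inputs) + self.scale * self.proj(inputs)

# Wire the custom block into a small image classifier.
model = keras.Sequential([
    keras.Input(shape=(64, 64, 3)),
    ScaledResidualBlock(16),
    keras.layers.GlobalAveragePooling2D(),
    keras.layers.Dense(2, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```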
Job posted by
Ritika Nigam

ML Researcher

at Oil & Energy Industry

Machine Learning (ML)
Data Science
Deep Learning
Digital Signal Processing
Statistical signal processing
Python
Big Data
Linux/Unix
OpenCV
TensorFlow
Keras
NCR (Delhi | Gurgaon | Noida)
1 - 3 yrs
₹8L - ₹12L / yr
  • Understanding business objectives and developing models that help to achieve them, along with metrics to track their progress
  • Managing available resources such as hardware, data, and personnel so that deadlines are met
  • Analysing the ML algorithms that could be used to solve a given problem and ranking them by their success probability
  • Exploring and visualizing data to gain an understanding of it, then identifying differences in data distribution that could affect performance when deploying the model in the real world
  • Verifying data quality, and/or ensuring it via data cleaning
  • Supervising the data acquisition process if more data is needed
  • Defining validation strategies
  • Defining the pre-processing or feature engineering to be done on a given dataset
  • Defining data augmentation pipelines
  • Training models and tuning their hyperparameters (a brief sketch follows this list)
  • Analysing the errors of the model and designing strategies to overcome them
  • Deploying models to production
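As a hedged illustration of the model training and hyperparameter tuning responsibilities above, the sketch below runs a small grid search with scikit-learn; the estimator, parameter grid and synthetic data are illustrative only and say nothing about the models actually used in this role.

```python
# Illustrative hyperparameter tuning sketch with scikit-learn (synthetic data).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

param_grid = {
    "n_estimators": [100, 300],
    "max_depth": [None, 10, 20],
    "min_samples_leaf": [1, 5],
}

search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid,
    cv=5,
    scoring="roc_auc",
    n_jobs=-1,
)
search.fit(X_train, y_train)

print("Best params:", search.best_params_)
print("Held-out score:", search.score(X_test, y_test))
```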
Job posted by
Susmita Mishra

Freelance Trainers

at Monkfox

Founded 2015  •  Services  •  0-20 employees  •  Bootstrapped
nano electronics
vehicle dynamics
computational dynamics
Android Development
Big Data
Industrial Design
Internet of Things (IOT)
Robotics
Anywhere
8 - 11 yrs
₹5L - ₹10L / yr
We are a team with a mission: a mission to create and deliver great learning experiences to engineering students through various workshops and courses. Buzz us if you are an industry professional and:

  • See great scope for improvement in higher technical education across the country and connect with our purpose of impacting it for good.
  • Are keen on sharing your technical expertise to enhance the practical learning of students.
  • Are innovative in your ways of creating content and delivering it.
  • Don't mind earning a few extra bucks while doing this in your free time.

Write to us at [email protected] and let us discuss how together we can take technological education in the country to new heights.
Job posted by
Tanu Mehra

Big Data Developer

at Mintifi

Founded 2017  •  Products & Services  •  20-100 employees  •  Raised funding
Big Data
Hadoop
MySQL
MongoDB
YARN
Mumbai
2 - 4 yrs
₹6L - ₹15L / yr
Job Title: Software Developer - Big Data

Responsibilities

We are looking for a Big Data Developer who can drive innovation, take ownership and deliver results.

  • Understand business requirements from stakeholders
  • Build & own Mintifi Big Data applications
  • Be heavily involved in every step of the product development process, from ideation to implementation to release
  • Design and build systems with automated instrumentation and monitoring
  • Write unit & integration tests (a small illustrative test sketch follows below)
  • Collaborate with cross-functional teams to validate and get feedback on the efficacy of results created by the big data applications; use the feedback to improve the business logic
  • Take a proactive approach to turn ambiguous problem spaces into clear design solutions

Qualifications

  • Hands-on programming skills in Apache Spark using Java or Scala
  • Good understanding of Data Structures and Algorithms
  • Good understanding of relational and non-relational database concepts (MySQL, Hadoop, MongoDB)
  • Experience in Hadoop ecosystem components like YARN, Zookeeper would be a strong plus
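As a hedged illustration of the unit-testing expectation above, here is a minimal pytest sketch for a Spark transformation. It is written in PySpark only to keep the examples on this page in one language (the posting itself asks for Spark in Java or Scala), and the business rule under test is hypothetical.

```python
# Illustrative pytest sketch for a Spark transformation (hypothetical logic).
import pytest
from pyspark.sql import SparkSession, functions as F

def add_overdue_flag(df):
    """Hypothetical business rule: flag invoices overdue by more than 30 days."""
    return df.withColumn("overdue", F.col("days_past_due") > 30)

@pytest.fixture(scope="module")
def spark():
    session = SparkSession.builder.master("local[1]").appName("unit-tests").getOrCreate()
    yield session
    session.stop()

def test_add_overdue_flag(spark):
    df = spark.createDataFrame(
        [("inv-1", 10), ("inv-2", 45)],
        ["invoice_id", "days_past_due"],
    )
    result = {r["invoice_id"]: r["overdue"] for r in add_overdue_flag(df).collect()}
    assert result == {"inv-1": False, "inv-2": True}
```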
Job posted by
Suchita Upadhyay