Data Engineer

at NSEIT

Posted by Vishal Pednekar
Remote only
7 - 12 yrs
₹20L - ₹40L / yr (ESOP available)
Full time
Skills
Data engineering
Big Data
Data Engineer
Amazon Web Services (AWS)
NoSQL Databases
Programming
  • Design AWS data ingestion frameworks and pipelines based on the specific needs driven by the Product Owners and user stories…
  • Experience building Data Lakes on AWS, and hands-on experience with S3, EKS, ECS, AWS Glue, AWS KMS, AWS Firehose, EMR
  • Experience in Apache Spark programming with Databricks
  • Experience working with NoSQL databases such as Cassandra, HBase, and Elasticsearch
  • Hands-on experience leveraging CI/CD to rapidly build & test application code
  • Expertise in data governance and data quality
  • Experience working with PCI data and working with data scientists is a plus
  • 4+ years of experience with the following Big Data building blocks: file formats (Parquet, Avro, ORC), resource management, distributed processing, and RDBMS
  • 5+ years of experience designing and developing data pipelines for data ingestion or transformation using AWS technologies
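The ingestion requirements above center on laying data out on S3 for engines such as Glue, EMR, and Spark. As a purely illustrative, stdlib-only sketch (the bucket prefix, dataset name, and layout below are hypothetical, not NSEIT's actual scheme), pipelines of this kind typically write objects under Hive-style partitioned keys so that query engines can prune by date:

```python
from datetime import date

def partition_key(prefix: str, dataset: str, d: date, filename: str) -> str:
    """Build a Hive-style partitioned object key of the kind AWS Glue and
    Spark commonly use when laying out a data lake on S3."""
    return (
        f"{prefix}/{dataset}/"
        f"year={d.year:04d}/month={d.month:02d}/day={d.day:02d}/{filename}"
    )

# Example: where a daily Parquet file for 5 Jan 2024 would land.
key = partition_key("raw", "trades", date(2024, 1, 5), "part-0000.parquet")
print(key)  # raw/trades/year=2024/month=01/day=05/part-0000.parquet
```

Partitioning by date columns like this is what lets downstream Glue/Athena/Spark jobs scan only the days they need instead of the whole dataset.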

About NSEIT

Founded
1999
Type
Size
100-1000
Stage
Profitable
About

NSEIT is a global technology firm with a focus on the financial services industry. We are a vertical specialist organization with domain expertise and technology focus aligned to the needs of financial institutions. We offer Application Services, IT Enabled Services (Assessments), Testing Center of Excellence, Infrastructure Services, Integrated Security Response Center and Analytics as a Service primarily for the BFSI segment. 

We are a 100% subsidiary of National Stock Exchange of India Limited (NSEIL). Being a part of the stock exchange, our solutions inherently encapsulate industry strength, security, scalability, reliability, and performance features.

Our focus on domain and key technologies enables us to use new trends in digital technologies like cloud computing, mobility and analytics while building solutions for our customers.

We are passionate about building innovative, futuristic and robust solutions for our customers. We have been assessed at Maturity Level 5 in Capability Maturity Model Integration for Development (CMMI® - DEV) v 1.3. We are also certified for ISO 9001:2015 for providing high quality products and services, and ISO 27001:2013 for our Information Security Management Systems.

Our offices are located in India and the US.

Connect with the team
Akansha Singh
Vishal Pednekar
Raju Soni
Manish Buswala

Similar jobs

Information Solution Provider Company
Agency job
via Jobdost by Sathish Kumar
Delhi, Gurugram, Noida, Ghaziabad, Faridabad
3 - 7 yrs
₹10L - ₹15L / yr
SQL
Hadoop
Spark
Machine Learning (ML)
Data Science
+3 more

Job Description:

The data science team is responsible for solving business problems with complex data. Data complexity can be characterized in terms of volume, dimensionality, and multiple touchpoints/sources. We understand the data, ask fundamental first-principles questions, and apply our analytical and machine learning skills to solve the problem in the best way possible.

 

Our ideal candidate

The role is a client-facing one, hence good communication skills are a must.

The candidate should have the ability to communicate complex models and analysis in a clear and precise manner. 

 

The candidate would be responsible for:

  • Comprehending business problems properly - what to predict, how to build the DV, what value addition he/she is bringing to the client, etc.
  • Understanding and analyzing large, complex, multi-dimensional datasets and building features relevant for the business
  • Understanding the math behind algorithms and choosing one over another
  • Understanding approaches like stacking and ensembling, and applying them correctly to increase accuracy

Desired technical requirements

  • Proficiency with Python and the ability to write production-ready code.
  • Experience in PySpark, machine learning, and deep learning
  • Big data experience, e.g. familiarity with Spark and Hadoop, is highly preferred
  • Familiarity with SQL or other databases.
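The stacking/ensembling point above can be illustrated with the simplest form of ensembling, prediction averaging. This is a toy sketch with invented stand-in "models" (plain functions), not a production pattern; stacking would additionally train a meta-model on these outputs:

```python
def ensemble_predict(models, x):
    """Average the predictions of several models - the simplest ensemble.
    Reduces variance when the individual models make uncorrelated errors."""
    preds = [m(x) for m in models]
    return sum(preds) / len(preds)

# Toy "models": each maps a feature value to a prediction.
m1 = lambda x: 2.0 * x
m2 = lambda x: 2.0 * x + 1.0
m3 = lambda x: 2.0 * x - 1.0

# The +1 and -1 biases cancel out in the average.
print(ensemble_predict([m1, m2, m3], 3.0))  # 6.0
```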
a global business process management company
Agency job
via Jobdost by Saida Jabbar
Bengaluru (Bangalore)
3 - 8 yrs
₹14L - ₹20L / yr
Business Intelligence (BI)
PowerBI
Windows Azure
Git
SVN
+9 more

Power BI Developer (Azure Developer)

Job Description:

Senior visualization engineer with an understanding of Azure Data Factory & Databricks, to develop and deliver solutions that enable the delivery of information to audiences in support of key business processes.

Ensure code and design quality through execution of test plans, and assist in the development of standards & guidelines, working closely with internal and external design, business, and technical counterparts.

 

Desired Competencies:

  • Strong design concepts for data visualization centered on the business user, and a knack for communicating insights visually.
  • Ability to produce any of the available charting methods with drill-down options and action-based reporting. This includes use of the right graphs for the underlying data, with company themes and objects.
  • Publishing reports & dashboards on reporting server and providing role-based access to users.
  • Ability to create wireframes on any tool for communicating the reporting design.
  • Creation of ad-hoc reports & dashboards to visually communicate data hub metrics (metadata information) for top management understanding.
  • Should be able to handle huge volumes of data from databases such as SQL Server, Synapse, Delta Lake, or flat files, and create high-performance dashboards.
  • Should be good at Power BI development
  • Expertise in 2 or more BI (Visualization) tools in building reports and dashboards.
  • Understanding of Azure components like Azure Data Factory, Data lake Store, SQL Database, Azure Databricks
  • Strong knowledge in SQL queries
  • Must have worked in full life-cycle development from functional design to deployment
  • Intermediate ability to format, process, and transform data
  • Should have working knowledge of GIT, SVN
  • Good experience establishing connections with heterogeneous sources like Hadoop, Hive, Amazon, Azure, Salesforce, SAP, HANA, APIs, various databases, etc.
  • Basic understanding of data modelling and ability to combine data from multiple sources to create integrated reports
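Dashboards of the kind described above sit on aggregate queries over source tables. A self-contained sketch using SQLite purely so the example runs anywhere (the role itself targets SQL Server, Synapse, and Delta Lake; the table and values are invented):

```python
import sqlite3

# Tiny in-memory table standing in for a reporting source.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("North", 100.0), ("South", 250.0), ("North", 50.0)],
)

# The kind of GROUP BY aggregate a BI visual is typically bound to.
rows = conn.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region"
).fetchall()
print(rows)  # [('North', 150.0), ('South', 250.0)]
```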

 

Preferred Qualifications:

  • Bachelor's degree in Computer Science or Technology
  • Proven success in contributing to a team-oriented environment
at JMAN group
1 recruiter
Posted by JMAN Digital Services P Ltd
Chennai
2 - 5 yrs
₹5L - ₹14L / yr
SQL
Python
Java
ADF
Snow flake schema
+3 more
To all the #DataEngineers, we have an immediate requirement for the #Chennai location.

**Education:**
Qualification – Any engineering graduate with STRONG programming and logical reasoning skills.

**Minimum years of Experience:2 – 5 years**

Required Skills:
Previous experience as a Data Engineer or in a similar role.
Technical expertise with data models, data mining, and segmentation techniques.

**Knowledge of programming languages (e.g. Java and Python).
Hands-on experience with SQL programming
Hands-on experience with Python programming
Knowledge of the tools DBT, ADF, Snowflake, and Databricks would be an added advantage for our current project.**

Strong numerical and analytical skills.
Experience in dealing directly with customers and internal sales organizations.

Strong written and verbal communication, including technical writing skills.

Good to have: Hands-on experience in Cloud services.
Knowledge with ML
Data Warehouse builds (DB, SQL, ETL, Reporting Tools like Power BI…)

Do share your profile at gayathrirajagopalan@jmangroup.com
People Impact
Agency job
via People Impact by Pruthvi K
Remote only
4 - 10 yrs
₹10L - ₹20L / yr
Amazon Redshift
Data Warehousing
Amazon Web Services (AWS)
Snow flake schema
Data Warehouse (DWH)

Job Title: Data Warehouse/Redshift Admin

Location: Remote

Job Description

AWS Redshift Cluster Planning

AWS Redshift Cluster Maintenance

AWS Redshift Cluster Security

AWS Redshift Cluster monitoring.

Experience managing day-to-day operations of provisioning, maintaining backups, DR, and monitoring of AWS Redshift/RDS clusters

Hands-on experience with query tuning in a high-concurrency environment

Expertise setting up and managing AWS Redshift

AWS certification preferred (AWS Certified SysOps Administrator)

at Angel One
4 recruiters
Posted by Andleeb Mujeeb
Remote only
2 - 6 yrs
₹12L - ₹18L / yr
Amazon Web Services (AWS)
PySpark
Python
Scala
Go Programming (Golang)
+19 more

Designation: Specialist - Cloud Service Developer (ABL_SS_600)

Position description:

  • The person would be primarily responsible for developing solutions using AWS services, e.g. Fargate, Lambda, ECS, ALB, NLB, S3, etc.
  • Apply advanced troubleshooting techniques to provide solutions to issues pertaining to service availability, performance, and resiliency
  • Monitor & optimize performance using AWS dashboards and logs
  • Partner with engineering leaders and peers in delivering technology solutions that meet the business requirements
  • Work with the cloud team in an agile approach and develop cost-optimized solutions

 

Primary Responsibilities:

  • Develop solutions using AWS services including Fargate, Lambda, ECS, ALB, NLB, S3, etc.

 

Reporting Team

  • Reporting Designation: Head - Big Data Engineering and Cloud Development (ABL_SS_414)
  • Reporting Department: Application Development (2487)

Required Skills:

  • AWS certification would be preferred
  • Good understanding of monitoring (CloudWatch, alarms, logs, custom metrics, Trust SNS configuration)
  • Good experience with Fargate, Lambda, ECS, ALB, NLB, S3, Glue, Aurora, and other AWS services.
  • Knowledge of storage (S3, lifecycle management, event configuration) preferred
  • Good grounding in data structures and programming (PySpark / Python / Golang / Scala)
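Lambda development of the kind listed above is usually exercised through small handler functions. A minimal, hypothetical sketch of the standard AWS Lambda Python handler shape, invoked locally with a fake event the way unit tests do (the Fargate/ALB deployment wiring and any real event schema are omitted):

```python
import json

def handler(event, context):
    """Minimal AWS Lambda-style handler: reads a field from the event and
    returns an API-Gateway-shaped response. Illustrative only."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Local invocation with a fake event; in AWS, the runtime supplies both args.
resp = handler({"name": "cloud"}, None)
print(resp["statusCode"], resp["body"])
```

Keeping the handler a plain function like this is what makes it testable without any cloud infrastructure.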
at Crewscale
6 recruiters
Posted by vinodh Rajamani
Remote only
2 - 6 yrs
₹4L - ₹40L / yr
Python
SQL
Amazon Web Services (AWS)
ETL
Informatica
+2 more
Crewscale – Toplyne Collaboration:

The present role is a Data Engineer role for the Crewscale–Toplyne collaboration.
Crewscale is an exclusive partner of Toplyne.

About Crewscale:
Crewscale is a premium technology company focused on helping companies build world-class
scalable products. We are a product-based start-up with a code assessment platform
that is used by top technology disrupters across the world.

Crewscale works with premium product companies (Indian and international) like Swiggy,
ShareChat, Grab, Capillary, Uber, Workspan, Ovo, and many more. We are responsible for
managing infrastructure for Swiggy as well.
We focus on building only world-class tech products, and our USP is building technology that can
handle scale from 1 million to 1 billion hits.

We invite candidates who have a zeal to develop world-class products to come and work with us.

Toplyne

Who are we? 👋

Toplyne is a global SaaS product built to help revenue teams at businesses with a self-service motion and a large user base identify which users to spend time on, when, and for what outcome. Think self-service or freemium-led companies like Figma, Notion, Freshworks, and Slack. We do this by helping companies recognize signals across their product engagement, sales, billing, and marketing data.

Founded in June 2021, Toplyne is backed by marquee investors like Sequoia, Together Fund, and a number of well-known angels. You can read more about us at https://bit.ly/ForbesToplyne and https://bit.ly/YourstoryToplyne .

What will you get to work on? 🏗️

  • Design, develop, and maintain scalable data pipelines and a data warehouse to support continuing increases in data volume and complexity.

  • Develop and implement processes and systems to monitor data quality and data mining, ensuring production data is always accurate and available for the key partners and business processes that depend on it.

  • Perform data analysis required to solve data related issues and assist in the resolution of data issues.

  • Complete ownership - You’ll build highly scalable platforms and services that support rapidly growing data needs in Toplyne. There’s no instruction book, it’s yours to write. You’ll figure it out, ship it, and iterate.

What do we expect from you? 🙌🏻

  • 3-6 years of relevant work experience in a Data Engineering role.

  • Advanced working SQL knowledge and experience with relational databases, including query authoring and working familiarity with a variety of database systems.

  • Experience building and optimising data pipelines, architectures and data sets.

  • Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.

  • Strong analytic skills related to working with unstructured datasets.

  • A good understanding of Airflow, Spark, NoSQL databases, and Kafka is nice to have.
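Orchestrators like Airflow model a pipeline as a DAG of dependent tasks and run them in dependency order. A stdlib-only sketch of that core idea (the task names are hypothetical), using Python's graphlib to compute a valid execution order without any orchestrator installed:

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline: each task maps to the set of tasks it depends on.
pipeline = {
    "extract": set(),
    "validate": {"extract"},
    "transform": {"validate"},
    "load": {"transform"},
    "report": {"load"},
}

# A scheduler would run tasks in this order (parallelizing independent ones).
order = list(TopologicalSorter(pipeline).static_order())
print(order)  # ['extract', 'validate', 'transform', 'load', 'report']
```

Airflow adds scheduling, retries, and backfills on top, but dependency-ordered execution is the heart of it.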

at SteelEye
1 video
3 recruiters
Posted by Arjun Shivraj
Bengaluru (Bangalore)
5 - 20 yrs
₹20L - ₹35L / yr
Python
ETL
Big Data
Amazon Web Services (AWS)
pandas

What you’ll do

  • Deliver plugins for our Python-based ETL pipelines.
  • Deliver Python microservices for provisioning and managing cloud infrastructure.
  • Implement algorithms to analyse large data sets.
  • Draft design documents that translate requirements into code.
  • Deal with challenges associated with handling large volumes of data.
  • Assume responsibilities from technical design through technical client support.
  • Manage expectations with internal stakeholders and context-switch in a fast paced environment.
  • Thrive in an environment that uses AWS and Elasticsearch extensively.
  • Keep abreast of technology and contribute to the engineering strategy.
  • Champion best development practices and provide mentorship.
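The first bullet above mentions plugins for a Python-based ETL pipeline. A common way to structure that is a decorator-based plugin registry; this is a minimal, hypothetical sketch (the registry, step names, and record shape are invented for illustration, not SteelEye's actual API):

```python
from typing import Callable, Dict, Iterable, List

# Registry of named record-transform plugins.
PLUGINS: Dict[str, Callable[[dict], dict]] = {}

def plugin(name: str):
    """Decorator that registers a transform function under a name."""
    def register(fn: Callable[[dict], dict]) -> Callable[[dict], dict]:
        PLUGINS[name] = fn
        return fn
    return register

@plugin("strip_whitespace")
def strip_whitespace(record: dict) -> dict:
    return {k: v.strip() if isinstance(v, str) else v for k, v in record.items()}

@plugin("uppercase_symbol")
def uppercase_symbol(record: dict) -> dict:
    record["symbol"] = record["symbol"].upper()
    return record

def run_pipeline(records: Iterable[dict], steps: List[str]) -> List[dict]:
    """Apply the named plugins to each record, in order."""
    out = []
    for rec in records:
        for step in steps:
            rec = PLUGINS[step](rec)
        out.append(rec)
    return out

rows = run_pipeline([{"symbol": " aapl "}], ["strip_whitespace", "uppercase_symbol"])
print(rows)  # [{'symbol': 'AAPL'}]
```

The appeal of the pattern is that new transforms can be shipped as plugins without touching the pipeline runner.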

What we’re looking for

  • Experience in Python 3.
  • Python libraries used for data (such as pandas, NumPy).
  • AWS.
  • Elasticsearch.
  • Performance tuning.
  • Object Oriented Design and Modelling.
  • Delivering complex software, ideally in a FinTech setting.
  • CI/CD tools.
  • Knowledge of design patterns.
  • Sharp analytical and problem-solving skills.
  • Strong sense of ownership.
  • Demonstrable desire to learn and grow.
  • Excellent written and oral communication skills.
  • Mature collaboration and mentoring abilities.

About SteelEye Culture

  • Work from home until you are vaccinated against COVID-19
  • Top-of-the-line health insurance
  • Order discounted meals every day from a dedicated portal
  • Fair and simple salary structure
  • 30+ holidays in a year
  • Fresh fruits every day
  • Centrally located, 5 mins to the nearest metro station (MG Road)
  • Measured on output and not input
at Datametica Solutions Private Limited
1 video
7 recruiters
Posted by Sumangali Desai
Pune, Hyderabad
7 - 12 yrs
₹7L - ₹20L / yr
Apache Spark
Big Data
Spark
Scala
Hadoop
+3 more
We at Datametica Solutions Private Limited are looking for a Big Data Spark Lead with a passion for the cloud and knowledge of different on-premise and cloud data implementations in the field of Big Data and Analytics, including but not limited to Teradata, Netezza, Exadata, Oracle, Cloudera, Hortonworks, and the like.
Ideal candidates should have technical experience in migrations and the ability to help customers get value from Datametica's tools and accelerators.

Job Description
Experience : 7+ years
Location : Pune / Hyderabad
Skills :
  • Drive and participate in requirements gathering workshops, estimation discussions, design meetings, and status review meetings
  • Participate and contribute in solution design and solution architecture for implementing Big Data projects on-premise and on the cloud
  • Technical hands-on experience in the design, coding, development, and management of large Hadoop implementations
  • Proficient in SQL, Hive, Pig, Spark SQL, shell scripting, Kafka, Flume, and Sqoop on large Big Data and Data Warehousing projects, with a Java, Python, or Scala based Hadoop programming background
  • Proficient with various development methodologies like waterfall, agile/scrum, and iterative
  • Good interpersonal skills and excellent communication skills for US and UK based clients

About Us!
A global leader in data warehouse migration and modernization to the cloud, we empower businesses by migrating their data/workload/ETL/analytics to the cloud by leveraging automation.

We have expertise in transforming legacy Teradata, Oracle, Hadoop, Netezza, Vertica, and Greenplum, along with ETLs like Informatica, DataStage, Ab Initio, and others, to cloud-based data warehousing, with further capabilities in data engineering, advanced analytics solutions, data management, data lakes, and cloud optimization.

Datametica is a key partner of the major cloud service providers - Google, Microsoft, Amazon, Snowflake.


We have our own products!
Eagle –
Data warehouse Assessment & Migration Planning Product
Raven –
Automated Workload Conversion Product
Pelican -
Automated Data Validation Product, which helps automate and accelerate data migration to the cloud.

Why join us!
Datametica is a place to innovate, bring new ideas to life, and learn new things. We believe in building a culture of innovation, growth, and belonging. Our people and their dedication over the years are the key factors in achieving our success.

Benefits we Provide!
Working with highly technical, passionate, mission-driven people
Subsidized Meals & Snacks
Flexible Schedule
Approachable leadership
Access to various learning tools and programs
Pet Friendly
Certification Reimbursement Policy

Check out more about us on our website below!
www.datametica.com
at Spoonshot Inc.
3 recruiters
Posted by Rajesh Bhutada
Bengaluru (Bangalore)
1 - 4 yrs
₹9L - ₹15L / yr
Data Analytics
Data Visualization
Analytics
SQLite
PowerBI
+5 more
- Prior experience in business analytics and knowledge of related analysis or visualization tools
- Expecting a minimum of 2-4 years of relevant experience
- You will be managing a team of 3 currently
- Take ownership of developing and managing one of the largest and richest food (recipe, menu, and CPG) databases
- Interact with cross-functional teams (Business, Food Science, Product, and Tech) on a regular basis to plan the future of client and internal food data management
- Should have a natural flair for playing with numbers and data, and a keen eye for detail and quality
- Will spearhead the Ops team in achieving targets while maintaining staunch attentiveness to the coverage, completeness, and quality of the data
- Will plan and manage projects while identifying opportunities to optimize costs and processes
- Good business acumen in creating logic & process flows, plus quick and smart decision-making skills, are expected
- Will also be responsible for the recruitment, induction, and training of new members
- Set competitive team targets; guide and support the team members to go the extra mile and achieve set targets


Added Advantages :
- Experience in a food sector / insights company
- Has a passion for exploring different cuisines
- Understands industry-related jargon and has a natural flair for learning more about anything related to food
at OpexAI
1 recruiter
Posted by Jasmine Shaik
Hyderabad
0 - 1 yrs
₹1L - ₹1L / yr
Business Intelligence (BI)
Python
Big Data
Big Data, Business Intelligence, Python, and R skills