Data Engineer

at NSEIT

Posted by Vishal Pednekar
Remote only
7 - 12 yrs
₹20L - ₹40L / yr (ESOP available)
Full time
Skills
Data engineering
Big Data
Data Engineer
Amazon Web Services (AWS)
NoSQL Databases
Programming
  • Design AWS data ingestion frameworks and pipelines based on the specific needs driven by the Product Owners and user stories… (a sketch of such a pipeline follows this list)
  • Experience building Data Lakes on AWS, with hands-on experience in S3, EKS, ECS, AWS Glue, AWS KMS, AWS Firehose and EMR
  • Experience with Apache Spark programming on Databricks
  • Experience working with NoSQL databases such as Cassandra, HBase and Elasticsearch
  • Hands-on experience leveraging CI/CD to rapidly build and test application code
  • Expertise in data governance and data quality
  • Experience working with PCI data and working with data scientists is a plus
  • At least 4 years of experience with the following Big Data frameworks: file formats (Parquet, Avro, ORC), resource management, distributed processing and RDBMS
  • 5+ years of experience designing and developing data pipelines for data ingestion or transformation using AWS technologies
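As a hedged illustration of the kind of AWS ingestion pipeline described above: a minimal PySpark sketch, not NSEIT's actual framework. The bucket names, paths and the event_date field are hypothetical.

```python
# Minimal S3-to-data-lake ingestion sketch (PySpark). All names and
# paths are illustrative assumptions, not the actual NSEIT stack.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("s3-ingestion-sketch").getOrCreate()

# Read raw JSON events landed in S3 (e.g. by a Firehose delivery stream).
raw = spark.read.json("s3://example-raw-bucket/events/2024/01/")

# Light cleansing: drop rows without an id, stamp the ingestion time.
clean = (
    raw.dropna(subset=["event_id"])
       .withColumn("ingested_at", F.current_timestamp())
)

# Write partitioned Parquet into the curated zone of the lake
# (assumes the raw events carry an event_date field).
clean.write.mode("append").partitionBy("event_date").parquet(
    "s3://example-curated-bucket/events/"
)
```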

About NSEIT

NSEIT is a global technology firm with a focus on the financial services industry. We are a vertical specialist organization with domain expertise and technology focus aligned to the needs of financial institutions. We offer Application Services, IT Enabled Services (Assessments), Testing Center of Excellence, Infrastructure Services, Integrated Security Response Center and Analytics as a Service primarily for the BFSI segment. 

We are a 100% subsidiary of the National Stock Exchange of India Limited (NSEIL). As a part of the stock exchange, our solutions inherently encapsulate industry strength, security, scalability, reliability and performance.

Our focus on domain and key technologies enables us to use new trends in digital technologies like cloud computing, mobility and analytics while building solutions for our customers.

We are passionate about building innovative, futuristic and robust solutions for our customers. We have been assessed at Maturity Level 5 in Capability Maturity Model Integration for Development (CMMI® - DEV) v 1.3. We are also certified for ISO 9001:2015 for providing high quality products and services, and ISO 27001:2013 for our Information Security Management Systems.

Our offices are located in India and the US.

Founded 1999  •  Products & Services  •  100-1000 employees  •  Profitable

Similar jobs

Lead Engineer - Data Quality

at Leading Sales Platform

Agency job
via Qrata
Big Data
ETL
Spark
Data engineering
Data governance
Informatica Data Quality
Java
Scala
Python
Bengaluru (Bangalore)
5 - 10 yrs
₹30L - ₹45L / yr
Responsibilities:

  • Work with product managers and development leads to create testing strategies
  • Develop and scale an automated data validation framework (see the sketch after this list)
  • Build and monitor key metrics of data health across the entire Big Data pipeline
  • Set up early alerting and an escalation process to quickly identify and remedy quality issues before anything goes live in front of the customer
  • Build/refine tools and processes for quick root-cause diagnostics
  • Contribute to the creation of quality assurance standards, policies, and procedures to influence the DQ mindset across the company

Required skills and experience:

  • Solid experience working in Big Data ETL environments with Spark and Java/Scala/Python
  • Strong experience with AWS cloud technologies (EC2, EMR, S3, Kinesis, etc.)
  • Experience building monitoring/alerting frameworks with tools like New Relic, with escalations via Slack/email/dashboard integrations
  • Executive-level communication, prioritization, and team-leadership skills
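The validation framework itself is unspecified here; as a flavour of what one such check might look like, below is a minimal PySpark sketch that fails a pipeline when a null-rate threshold is breached. The table path, column and threshold are assumptions.

```python
# Minimal data-validation sketch (PySpark): block bad data before it
# goes live. Paths, columns and thresholds are illustrative.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-check-sketch").getOrCreate()

df = spark.read.parquet("s3://example-bucket/orders/")

total = df.count()
null_ids = df.filter(F.col("order_id").isNull()).count()
null_rate = null_ids / max(total, 1)

# Raising here halts the pipeline; a production framework would also
# emit metrics and page the on-call channel.
if null_rate > 0.01:
    raise ValueError(f"DQ check failed: {null_rate:.2%} null order_id rows")
```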
Job posted by Blessy Fernandes

data engineer

at Information Solution Provider Company

Agency job
via Jobdost
Spark
Scala
Hadoop
PySpark
Data engineering
Big Data
Machine Learning (ML)
Delhi
3 - 5 yrs
₹3L - ₹10L / yr

Data Engineer 

Responsibilities:

 

  • Designing and implementing fine-tuned, production-ready data/ML pipelines on the Hadoop platform.
  • Driving optimization, testing and tooling to improve quality.
  • Reviewing and approving high-level and detailed designs to ensure that the solution delivers on business needs and aligns with the data & analytics architecture principles and roadmap.
  • Understanding business requirements and solution design to develop and implement solutions that adhere to big data architectural guidelines and address business requirements.
  • Following proper SDLC (code review, sprint process).
  • Identifying, designing, and implementing internal process improvements: automating manual processes, optimizing data delivery, etc.
  • Building robust and scalable data infrastructure (both batch processing and real-time) to support the needs of internal and external users.
  • Understanding various data security standards and using secure data security tools to apply and adhere to the required data controls for user access on the Hadoop platform.
  • Supporting and contributing to development guidelines and standards for data ingestion.
  • Working with the data scientists and business analytics team to assist with data ingestion and data-related technical issues.
  • Designing and documenting the development & deployment flow.

 

Requirements:

 

  • Experience in developing REST API services using one of the Scala frameworks.
  • Ability to troubleshoot and optimize complex queries on the Spark platform (see the sketch after this list).
  • Expert in building and optimizing ‘big data’ data/ML pipelines, architectures and data sets.
  • Knowledge of modelling unstructured data into structured designs.
  • Experience in Big Data access and storage techniques.
  • Experience in cost estimation based on design and development.
  • Excellent debugging skills across the technical stack mentioned above, including analyzing server and application logs.
  • Highly organized, self-motivated, proactive, with the ability to propose well-designed solutions.
  • Good time management and multitasking skills, able to meet deadlines working independently and as part of a team.
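As a flavour of the Spark troubleshooting mentioned above, here is a small sketch in PySpark (the idea is the same in Scala): inspect the physical plan and broadcast a small dimension table to avoid a shuffle-heavy join. The table paths are hypothetical.

```python
# Sketch of one common Spark optimization: broadcasting the small side
# of a join. Paths are illustrative.
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("spark-tuning-sketch").getOrCreate()

facts = spark.read.parquet("/data/facts")   # large fact table
dims = spark.read.parquet("/data/dims")     # small lookup table

joined = facts.join(broadcast(dims), on="dim_id")

# explain() prints the physical plan; a BroadcastHashJoin here means
# the expensive shuffle of the large side has been avoided.
joined.explain()
```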

 

Job posted by Saida Jabbar

Scala Developer

at a company that helps its customers successfully navigate their digital transformation

Agency job
via HyrHub
Scala
Java
Spark
Amazon Web Services (AWS)
Amazon EC2
Bengaluru (Bangalore)
5 - 11 yrs
₹15L - ₹25L / yr

Job Requirements :

- Define, implement and validate solution frameworks and architecture patterns for data modeling, data integration, processing, reporting, analytics and visualization using leading cloud, big data, open-source and other enterprise technologies.

- Develop scalable data and analytics solutions leveraging standard platforms, frameworks, patterns and full stack development skills.

- Analyze, characterize and understand data sources, participate in design discussions and provide guidance related to database technology best practices.

- Write tested, robust code that can be quickly moved into production

Responsibilities :

- Experience with distributed data processing and management systems.

- Experience with big data and cloud technologies including Spark SQL, Java/Scala, HDFS, AWS EC2, AWS S3, etc.

- Familiarity with leveraging and modifying open source libraries to build custom frameworks.

Primary Technical Skills :
- Spark SQL, Java/Scala, Sbt/Maven/Gradle, HDFS, Hive, AWS (EC2, S3, SQS, EMR, Glue Scripts, Lambda, Step Functions), IntelliJ IDE, JIRA, Git, Bitbucket/GitLab, Linux, Oozie.


Notice Period - Max 30-45 days only
Job posted by Shwetha Naik

AI Engineer

at StatusNeo

Founded 2020  •  Products & Services  •  100-1000 employees  •  Profitable
Artificial Intelligence (AI)
Amazon Web Services (AWS)
Windows Azure
Hadoop
Scala
Python
Google Cloud Platform (GCP)
PostgreSQL
Gurugram, Hyderabad, Bengaluru (Bangalore)
1 - 3 yrs
₹3L - ₹12L / yr


  • Build data products and processes alongside the core engineering and technology team.
  • Collaborate with senior data scientists to curate, wrangle, and prepare data for use in their advanced analytical models.
  • Integrate data from a variety of sources, ensuring that it adheres to data quality and accessibility standards.
  • Modify and improve data engineering processes to handle ever larger, more complex, and more varied data sources and pipelines.
  • Use Hadoop architecture and HDFS commands to design and optimize data queries at scale (see the sketch after this list).
  • Evaluate and experiment with novel data engineering tools and advise information technology leads and partners about new capabilities to determine optimal solutions for particular technical problems or designated use cases.
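To make the HDFS item concrete, here is a hedged sketch that scans a directory for the "small files" problem that hurts query performance at scale. The namenode host and path are invented, and pyarrow's HDFS binding requires libhdfs to be available.

```python
# Hedged sketch: detect HDFS "small files" that degrade queries at
# scale. Host and paths are hypothetical; needs libhdfs for pyarrow.
from pyarrow import fs

hdfs = fs.HadoopFileSystem(host="namenode.example.internal", port=8020)

infos = hdfs.get_file_info(
    fs.FileSelector("/warehouse/events", recursive=True)
)

# Files far below the HDFS block size (~128 MB) usually mean the
# writing job needs fewer output partitions or a compaction pass.
files = [i for i in infos if i.is_file]
small = [i for i in files if i.size < 16 * 1024 * 1024]
print(f"{len(small)} of {len(files)} files are under 16 MB")
```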
Job posted by Alex P

SQL Developer

at Datametica Solutions Private Limited

Founded 2013  •  Products & Services  •  100-1000 employees  •  Profitable
SQL
Linux/Unix
Shell Scripting
SQL server
PL/SQL
Data Warehouse (DWH)
Big Data
Hadoop
Pune
2 - 6 yrs
₹3L - ₹15L / yr

Datametica is looking for talented SQL engineers who would get training & the opportunity to work on Cloud and Big Data Analytics.

 

Mandatory Skills:

  • Strong SQL development skills (a taste of typical warehouse SQL appears in the sketch below)
  • Hands-on experience with at least one scripting language, preferably shell scripting
  • Development experience on data warehouse projects
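As a taste of the warehouse-style SQL this role centres on, a self-contained example using Python's built-in sqlite3 module (window functions need SQLite 3.25+); the table and figures are invented.

```python
# Self-contained warehouse-style SQL example (window function) using
# sqlite3; requires SQLite >= 3.25. Data is invented for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (region TEXT, month TEXT, amount INTEGER);
    INSERT INTO sales VALUES
        ('north', '2024-01', 100), ('north', '2024-02', 140),
        ('south', '2024-01', 90),  ('south', '2024-02', 70);
""")

# Running total per region: a typical DWH reporting pattern.
rows = conn.execute("""
    SELECT region, month, amount,
           SUM(amount) OVER (PARTITION BY region ORDER BY month)
               AS running_total
    FROM sales
    ORDER BY region, month
""").fetchall()

for row in rows:
    print(row)
```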

Opportunities:

  • Selected candidates will be provided training opportunities on one or more of the following: Google Cloud, AWS, DevOps tools, and Big Data technologies like Hadoop, Pig, Hive, Spark, Sqoop, Flume, and Kafka
  • Would get a chance to be part of enterprise-grade implementations of Cloud and Big Data systems
  • Will play an active role in setting up the Modern data platform based on Cloud and Big Data
  • Would be part of teams with rich experience in various aspects of distributed systems and computing
Job posted by Nikita Aher

Senior Data Engineer

at Bookr Inc

Founded 2019  •  Products & Services  •  20-100 employees  •  Raised funding
Big Data
Hadoop
Spark
Data engineering
Data Warehouse (DWH)
ETL
EMR
Amazon Redshift
PostgreSQL
SQL
Scala
Java
Python
Airflow
Remote, Chennai, Bengaluru (Bangalore)
4 - 7 yrs
₹15L - ₹35L / yr

In this role you will:

  • Be a core member of the data platform team, setting up the platform foundation while adhering to all required quality standards and design patterns
  • Write efficient, quality code that can scale
  • Adopt Bookr's quality standards, and recommend process standards and best practices
  • Research, learn and adapt new technologies to solve problems and improve existing solutions
  • Contribute to the engineering-excellence backlog
  • Identify performance issues
  • Conduct effective code and design reviews
  • Improve the reliability of the overall production system by proactively identifying patterns of failure
  • Lead and mentor junior engineers by example
  • Take end-to-end ownership of stories (including design, serviceability, performance and failure handling)
  • Strive to provide the best experience to anyone using our products
  • Conceptualise innovative and elegant solutions to challenging big data problems
  • Engage with Product Management and Business to drive the agenda, set your priorities and deliver awesome products
  • Adhere to company policies, procedures, mission, values, and standards of ethics and integrity

 

On day one, we'll expect you to have:

  • B.E./B.Tech from a reputed institution
  • Minimum 5 years of software development experience, with at least a year of experience leading/guiding people
  • Expert coding skills in Python/PySpark or Java/Scala
  • Deep understanding of the Big Data ecosystem - Hadoop and Spark
  • Project experience with Spark
  • Ability to independently troubleshoot Spark jobs
  • Good understanding of distributed systems
  • Fast learner who quickly adapts to new technologies
  • High ownership and commitment
  • Expert hands-on experience with RDBMS
  • Ability to work independently as well as collaboratively in a team

 

Added bonuses you may have:

  • Hands-on experience with EMR/Glue/Databricks
  • Hands-on experience with Airflow (a minimal DAG sketch follows this list)
  • Hands-on experience with the AWS Big Data ecosystem
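For the Airflow bonus above, a minimal DAG sketch, assuming Airflow 2.x; the DAG id, schedule and task body are invented for illustration.

```python
# Minimal Airflow 2.x DAG sketch; ids and schedule are illustrative.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_and_load():
    # Placeholder for the real ingestion logic.
    print("pulling yesterday's partition and loading the warehouse")


with DAG(
    dag_id="example_daily_ingest",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    ingest = PythonOperator(
        task_id="extract_and_load",
        python_callable=extract_and_load,
    )
```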

 

We are looking for passionate engineers who are always hungry for challenging problems. We believe in creating an opportunistic, yet balanced, work environment for savvy, entrepreneurial tech individuals. We thrive on remote work, with teams working across multiple time zones.

 

 

  • Flexible hours & remote work - We are a results-focused bunch, so we encourage you to work whenever and wherever you feel most creative and focused.
  • Unlimited PTO - We want you to feel free to recharge your batteries when you need it!
  • Stock options - Opportunity to participate in the company stock plan
  • Flat hierarchy - Team leaders at your fingertips
  • BFC (stands for bureaucracy-free company) - We're action-oriented and don't bother with dragged-out meetings or pointless admin exercises; we'd rather get our hands dirty!
  • Working alongside leaders - Being part of the core team gives you the opportunity to work directly with the founding and management team

 

Job posted by Nimish Mehta

Big Data/Java Programming

at Dailyhunt

Founded 2007  •  Product  •  500-1000 employees  •  Raised funding
Java
Big Data
Hadoop
Pig
Apache Hive
MapReduce
Elasticsearch
MongoDB
Analytics
Scalability
Leadership
Software engineering
Data Analytics
Data domain
Programming
Apache Hadoop
Apache Pig
Communication Skills
Bengaluru (Bangalore)
3 - 9 yrs
₹3L - ₹9L / yr
What You'll Do:

- Develop analytic tools, working on Big Data in a distributed environment; scalability will be the key
- Provide architectural and technical leadership in developing our core analytics platform
- Lead development efforts on product features in Java
- Help scale our mobile platform as we experience massive growth

What we Need:

- Passion to build an analytics & personalisation platform at scale
- 3 to 9 years of software engineering experience with a product-based company in the data analytics/big data domain
- Passion for design and development from scratch
- Expert-level Java programming and experience leading the full lifecycle of application development
- Experience in Analytics, Hadoop, Pig, Hive, MapReduce, Elasticsearch, MongoDB is an additional advantage
- Strong communication skills, verbal and written
Job posted by Khushboo Jain

Senior Data Engineer

at Data Team

Agency job
via Oceanworld
Big Data
Data engineering
Hadoop
data engineer
Apache Hive
Apache Kafka
Remote only
8 - 12 yrs
₹10L - ₹20L / yr
Senior Data Engineer (SDE)

(Hadoop, HDFS, Kafka, Spark, Hive)

Overall Experience - 8 to 12 years

Relevant Big Data experience - 3+ years of the above

Salary: Up to ₹20 LPA

Job location - Chennai / Bangalore / 

Notice Period - Immediate joiner / 15-20 days max

The Responsibilities of The Senior Data Engineer Are:

- Requirements gathering and assessment

- Break down complexity and translate requirements into specification artifacts and storyboards to build towards, using a test-driven approach

- Engineer scalable data pipelines using big data technologies including but not limited to Hadoop, HDFS, Kafka, HBase and Elasticsearch

- Implement the pipelines using execution frameworks including but not limited to MapReduce, Spark and Hive, using Java/Scala/Python for application design (see the streaming sketch after this list)

- Mentoring juniors in a dynamic team setting

- Manage stakeholders with proactive communication upholding TheDataTeam's brand and values
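As a flavour of the pipeline work above, a hedged Spark Structured Streaming sketch reading from Kafka and landing Parquet on HDFS; the brokers, topic and paths are hypothetical, and the job needs the spark-sql-kafka package on its classpath.

```python
# Kafka -> HDFS pipeline sketch (Spark Structured Streaming). All
# endpoints are illustrative; needs the spark-sql-kafka package.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("kafka-stream-sketch").getOrCreate()

events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker1.example:9092")
         .option("subscribe", "clickstream")
         .load()
)

# Kafka delivers raw bytes; cast key/value before persisting.
decoded = events.select(
    F.col("key").cast("string"),
    F.col("value").cast("string"),
    "timestamp",
)

(
    decoded.writeStream.format("parquet")
           .option("path", "hdfs:///data/clickstream/")
           .option("checkpointLocation", "hdfs:///checkpoints/clickstream/")
           .start()
           .awaitTermination()
)
```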

A Candidate Must Have the Following Skills:

- Strong problem-solving ability

- Excellent software design and implementation ability

- Exposure and commitment to agile methodologies

- Detail oriented with willingness to proactively own software tasks as well as management tasks, and see them to completion with minimal guidance

- Minimum 8 years of experience

- Should have experience in the full life cycle of at least one big data application

- Strong understanding of various storage formats (ORC/Parquet/Avro) (see the format sketch after this list)

- Should have hands-on experience in one of the Hadoop distributions (Hortonworks/Cloudera/MapR)

- Experience in at least one cloud environment (GCP/AWS/Azure)

- Should be well versed with at least one database (MySQL/Oracle/MongoDB/Postgres)

- Bachelor's in Computer Science, and preferably a Master's as well

- Should have good code review and debugging skills
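For the storage-formats item, a short PySpark sketch writing the same data as Parquet and ORC (Avro additionally needs the external spark-avro package); the input path is hypothetical.

```python
# Columnar format sketch: write one dataset as Parquet and ORC.
# Input path is illustrative; Avro would need the spark-avro package.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("format-sketch").getOrCreate()

df = spark.read.csv("/data/raw/events.csv", header=True, inferSchema=True)

# Both formats are columnar and splittable; the better choice depends
# on the engines reading them downstream (e.g. Hive favours ORC).
df.write.mode("overwrite").parquet("/data/lake/events_parquet")
df.write.mode("overwrite").orc("/data/lake/events_orc")
```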

Additional skills (Good to have):

- Experience in containerization (Docker/Heroku)

- Exposure to microservices

- Exposure to DevOps practices

- Experience in performance tuning of big data applications
Job posted by Chandan J

Data Engineer

at Japan Based Leading Company

Agency job
via Bleuming Technology
Big Data
Amazon Web Services (AWS)
Java
Python
MySQL
DevOps
Hadoop
Bengaluru (Bangalore)
3 - 10 yrs
₹0L - ₹20L / yr
We are looking for a data engineer with AWS Cloud infrastructure experience to join our Big Data Operations team. This role will provide advanced operations support, contribute to automation and system improvements, and work directly with enterprise customers to provide excellent customer service.
The candidate,
1. Must have very good hands-on technical experience of 3+ years with Java or Python
2. Working experience and good understanding of AWS Cloud; Advanced experience with IAM policy and role management
3. Infrastructure Operations: 5+ years supporting systems infrastructure operations, upgrades, deployments using Terraform, and monitoring
4. Hadoop: Experience with Hadoop (Hive, Spark, Sqoop) and / or AWS EMR
5. Knowledge of PostgreSQL/MySQL/DynamoDB backend operations
6. DevOps: Experience with DevOps automation - Orchestration/Configuration Management and CI/CD tools (Jenkins)
7. Version Control: Working experience with one or more version control platforms like GitHub or GitLab
8. Knowledge of Amazon QuickSight reporting
9. Monitoring: Hands-on experience with monitoring tools such as AWS CloudWatch, AWS CloudTrail, Datadog and Elasticsearch
10. Networking: Working knowledge of TCP/IP networking, SMTP, HTTP, load-balancers (ELB) and high availability architecture
11. Security: Experience implementing role-based security, including AD integration, security policies, and auditing in a Linux/Hadoop/AWS environment. Familiar with penetration testing and scan tools for remediation of security vulnerabilities.
12. Demonstrated successful experience learning new technologies quickly
WHAT WILL BE THE ROLES AND RESPONSIBILITIES?
1. Create procedures/run books for operational and security aspects of AWS platform
2. Improve AWS infrastructure by developing and enhancing automation methods (a hedged monitoring sketch follows this list)
3. Provide advanced business and engineering support services to end users
4. Lead other admins and platform engineers through design and implementation decisions to achieve balance between strategic design and tactical needs
5. Research and deploy new tools and frameworks to build a sustainable big data platform
6. Assist with creating programs for training and onboarding for new end users
7. Lead Agile/Kanban workflows and team process work
8. Troubleshoot issues to resolve problems
9. Provide status updates to Operations product owner and stakeholders
10. Track all details in the issue tracking system (JIRA)
11. Provide issue review and triage problems for new service/support requests
12. Use DevOps automation tools, including Jenkins build jobs
13. Fulfil any ad-hoc data or report request queries from different functional groups
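As a hedged example of the monitoring automation referenced above: a boto3 call that creates a CloudWatch CPU alarm. The alarm name, instance id, thresholds and SNS topic are invented placeholders.

```python
# CloudWatch alarm automation sketch (boto3). Every identifier below
# is an invented placeholder, not a real resource.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="ap-south-1")

cloudwatch.put_metric_alarm(
    AlarmName="example-emr-master-cpu-high",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,               # 5-minute evaluation windows
    EvaluationPeriods=3,      # 15 minutes above threshold before alarming
    Threshold=85.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:ap-south-1:123456789012:ops-alerts"],
)
```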
Job posted by Ameem Iqubal

Senior Developer - Frontend

at Elucidata Corporation

Founded 2015  •  Products & Services  •  100-1000 employees  •  Raised funding
Big Data
Javascript
AngularJS (1.x)
React.js
Remote, NCR (Delhi | Gurgaon | Noida)
4 - 6 yrs
₹15L - ₹20L / yr
About Elucidata: Our mission is to make the data-driven understanding of disease the default starting point in the drug discovery process. Our products and services further the understanding of the ways in which diseased cells differ from healthy ones. This understanding helps scientists discover new drugs more effectively and complements the move towards personalization. Biological big data will outpace data generated by YouTube and Twitter by 10x in the next 7 years. Our platform Polly will enable scientists to process different kinds of biological data and generate insights from them to accelerate drug discovery. Polly is already being used at premier biopharma companies like Pfizer and Agios, and at academic labs at Yale, MIT and Washington University.

We are looking for teammates who think out-of-the-box and are not satisfied with quick fixes or canned solutions to our industry's most challenging problems. If you seek an intellectually stimulating environment where you can have a major impact on a critically important industry, we'd like to talk to you.

About the Role: We are looking for engineers who want to build data-rich applications and love the end-to-end product journey, from understanding customer needs to the final product.

Key Responsibilities:

- Developing web applications to visualize and process scientific data
- Interacting with Product, Design and Engineering teams to spec, build, test and deploy new features
- Understanding user needs and the science behind them
- Mentoring junior developers

Requirements:

- Minimum 3-4 years of experience working in web development
- In-depth knowledge of JavaScript
- Hands-on experience with modern frameworks (Angular, React)
- Sound programming and computer science fundamentals
- Good understanding of web architecture and single-page applications

You might be a great cultural fit for Elucidata if:

- You are passionate about science
- You are a self-learner who wants to keep learning every day
- You regard your code as your craft and want to keep honing it
- You like to work hard to solve big challenges and enjoy the process of breaking down a problem one blow at a time
- You love science and can't stop being the geek at a party; of course, you party harder than everybody else there
Job posted by Bhuvnesh Sharma