ELK Developer at NSEIT @Bangalore
Posted by Akansha Singh
3 - 5 yrs
₹6L - ₹20L / yr
Bengaluru (Bangalore)
Skills
ELK Stack
Elasticsearch
Logstash
Kibana
API
Linux
• Introduction: The ELK (Elasticsearch, Logstash, and Kibana) stack is an end-to-end stack that delivers actionable insights in real time from almost any type of structured and unstructured data source, and it is the most popular log management platform.

• Responsibilities:
o Should be able to work with APIs, shards, etc. in Elasticsearch
o Write parsers in Logstash
o Create dashboards in Kibana


• Mandatory Experience:
o Must have a very good understanding of log analytics
o Hands-on experience with Elasticsearch, Logstash and Kibana at an expert level
o Elasticsearch: Should be able to write Kibana API queries (see the sketch below)
o Logstash: Should be able to write parsers
o Kibana: Create different visualizations and dashboards according to the client's needs
o Scripts: Should be able to write scripts in Linux
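For illustration only (not part of the original posting), here is a minimal sketch of the first two expectations: creating an index with explicit shard settings and querying it through the official elasticsearch Python client. The host, index name, and field names are assumptions.

# Minimal sketch, assuming a cluster at localhost:9200 and the
# elasticsearch-py 8.x client; index and field names are hypothetical.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# Create an index with explicit shard and replica counts.
es.indices.create(
    index="app-logs",
    settings={"number_of_shards": 3, "number_of_replicas": 1},
    mappings={"properties": {"message": {"type": "text"},
                             "level": {"type": "keyword"}}},
)

# Index one document, then search for error-level log lines.
es.index(index="app-logs", document={"message": "disk full", "level": "ERROR"})
es.indices.refresh(index="app-logs")
resp = es.search(index="app-logs", query={"term": {"level": "ERROR"}})
for hit in resp["hits"]["hits"]:
    print(hit["_source"])

The same client can drive the queries behind Kibana visualizations; dashboards themselves are built in the Kibana UI or via its saved-objects REST API.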
About NSEIT

Founded: 1999
Size: 100-1000
Stage: Profitable
About

NSEIT is a global technology firm with a focus on the financial services industry. We are a vertical specialist organization with domain expertise and technology focus aligned to the needs of financial institutions. We offer Application Services, IT Enabled Services (Assessments), Testing Center of Excellence, Infrastructure Services, Integrated Security Response Center and Analytics as a Service primarily for the BFSI segment. 

We are a 100% subsidiary of National Stock Exchange of India Limited (NSEIL). Being a part of the stock exchange, our solutions inherently encapsulate industry strength, security, scalability, reliability and performance features.

Our focus on domain and key technologies enables us to use new trends in digital technologies like cloud computing, mobility and analytics while building solutions for our customers.

We are passionate about building innovative, futuristic and robust solutions for our customers. We have been assessed at Maturity Level 5 in Capability Maturity Model Integration for Development (CMMI® - DEV) v 1.3. We are also certified for ISO 9001:2015 for providing high quality products and services, and ISO 27001:2013 for our Information Security Management Systems.

Our offices are located in India and the US.

Connect with the team

Akansha Singh
Vishal Pednekar
Raju Soni
Manish Buswala

Company social profiles: blog, LinkedIn, Facebook

Similar jobs

Chennai
5 - 8 yrs
₹6L - ₹7L / yr
Relational Database (RDBMS)
NoSQL Databases
MySQL
MS SQL Server

Database Architect

5 - 6 years

Good knowledge of relational and non-relational databases

Able to write complex queries, identify problematic queries, and provide solutions

Good hands-on experience with database tools

Experience with both SQL and NoSQL databases such as SQL Server, PostgreSQL, MongoDB, MariaDB, etc.

Experience with data model preparation, database structuring, etc.

Hyderabad
5 - 12 yrs
₹10L - ₹35L / yr
Analytics
Kubernetes
Apache Kafka
Data Analytics
Python
  • 3+ years of industry experience in administering (including setting up, managing and monitoring) data processing pipelines (both streaming and batch) using frameworks such as Kafka, the ELK Stack and Fluentd, and streaming databases like Druid
  • Strong industry expertise with containerization technologies, including Kubernetes and Docker Compose
  • 2+ years of industry experience in developing scalable data ingestion processes and ETLs
  • Experience with cloud platform services such as AWS, Azure or GCP, especially EKS and managed Kafka
  • Experience with scripting languages; Python experience highly desirable
  • 2+ years of industry experience in Python
  • Experience with popular modern web frameworks such as Spring Boot, Play Framework or Django
  • Demonstrated expertise in building cloud-native applications
  • Experience in API development using Swagger
  • Implementing automated testing platforms and unit tests
  • Proficient understanding of code versioning tools such as Git
  • Familiarity with continuous integration tools such as Jenkins
Responsibilities
  • Design and implement large-scale data processing pipelines using Kafka, Fluentd and Druid (see the sketch after this list)
  • Assist in DevOps operations
  • Develop data ingestion processes and ETLs
  • Design and implement APIs
  • Identify performance bottlenecks and bugs, and devise solutions to these problems
  • Help maintain code quality, organization, and documentation
  • Communicate with stakeholders regarding various aspects of the solution
  • Mentor team members on best practices
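As a hedged illustration of one pipeline stage, the sketch below consumes raw events from Kafka, applies a placeholder transformation, and republishes them downstream using the kafka-python client. The broker address, topic names, and the enrichment logic are assumptions, not part of the posting.

# Minimal sketch, assuming kafka-python and a broker at localhost:9092;
# topic names and the transformation are hypothetical.
import json
from kafka import KafkaConsumer, KafkaProducer

consumer = KafkaConsumer(
    "raw-events",
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    auto_offset_reset="earliest",
)
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda d: json.dumps(d).encode("utf-8"),
)

for record in consumer:
    event = record.value
    event["pipeline_stage"] = "enriched"  # placeholder enrichment step
    producer.send("enriched-events", event)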
Xoxoday
2 recruiters
Agency job
via Jobdost by Mamatha A
Bengaluru (Bangalore)
7 - 9 yrs
₹15L - ₹15L / yr
MySQL
MongoDB
Data modeling
API
Apache Kafka

What is the role?

You will be responsible for building and maintaining highly scalable data infrastructure for our cloud-hosted SaaS product. You will work closely with the product managers and the technical team to define and implement data pipelines for customer-facing and internal reports.

Key Responsibilities

  • Design and develop resilient data pipelines.
  • Write efficient queries to fetch data from the report database (see the sketch after this list).
  • Work closely with application backend engineers on data requirements for their stories.
  • Design and develop report APIs for the front end to consume.
  • Focus on building highly available, fault-tolerant report systems.
  • Constantly improve the architecture of the application by clearing the technical backlog.
  • Adopt a culture of learning and development to constantly keep pace with and adopt new technologies.
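For illustration only, a minimal sketch of the kind of efficient, parameterized report query mentioned above, using the PyMySQL client. The credentials, table, and columns are hypothetical placeholders.

# Minimal sketch, assuming PyMySQL and a hypothetical `orders` table.
import pymysql

conn = pymysql.connect(host="localhost", user="report", password="secret",
                       database="reports",
                       cursorclass=pymysql.cursors.DictCursor)
try:
    with conn.cursor() as cur:
        # Parameterized to avoid SQL injection; LIMIT keeps responses small.
        cur.execute(
            "SELECT customer_id, SUM(amount) AS total "
            "FROM orders WHERE created_at >= %s "
            "GROUP BY customer_id ORDER BY total DESC LIMIT %s",
            ("2024-01-01", 100),
        )
        rows = cur.fetchall()  # list of dicts, ready for a report API
finally:
    conn.close()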

What are we looking for?

An enthusiastic individual with the following skills. Please do not hesitate to apply even if you do not match all of them. We are open to promising candidates who are passionate about their work and are team players.

  • Education: BE/MCA or equivalent
  • Overall 8+ years of experience
  • Expert-level understanding of database concepts and BI
  • Well versed in databases such as MySQL and MongoDB, with hands-on experience in creating data models
  • Must have designed and implemented low-latency data warehouse systems
  • Must have a strong understanding of Kafka and related systems
  • Experience with the ClickHouse database preferred
  • Must have good knowledge of APIs and should be able to build interfaces for frontend engineers
  • Should be innovative and communicative in approach
  • Will be responsible for the functional/technical track of a project

Whom will you work with?

You will work with a top-notch tech team, working closely with the CTO and product team.  

What can you look for?

A wholesome opportunity in a fast-paced environment that will enable you to juggle between concepts while maintaining the quality of your work, interact and share your ideas, and learn a lot on the job. Work with a team of highly talented young professionals and enjoy the benefits of being at Xoxoday.

We are

Xoxoday is a rapidly growing fintech SaaS firm that propels business growth while focusing on human motivation. Backed by Giift and Apis Partners Growth Fund II, Xoxoday offers a suite of three products - Plum, Empuls, and Compass. Xoxoday works with more than 2000 clients across 10+ countries and over 2.5 million users. Headquartered in Bengaluru, Xoxoday is a 300+ strong team with four global offices in San Francisco, Dublin, Singapore, and New Delhi.

Way forward

We look forward to connecting with you. Since you may need time to review this opportunity, we will wait around 3-5 days before screening the collected applications and lining up job discussions with the hiring manager. We assure you that we will attempt to maintain a reasonable time window for successfully closing this requirement. Candidates will be kept informed and updated on their feedback and application status.

Open Eyes Software Solutions Pvt
Vadodara
2 - 7 yrs
₹2L - ₹7L / yr
Amazon Web Services (AWS)
Machine Learning (ML)
Python
API
Algorithms
Key Skills
o Convert machine learning models into APIs that other applications can access (see the sketch after these lists)
o Run machine learning tests and experiments
o Create machine learning models and retrain systems
o Study and transform data science prototypes
o Design machine learning systems
o Research and implement appropriate ML algorithms and tools
o Train and retrain systems when necessary
o Test and deploy models
o Use AI to empower the company with novel capabilities
o Design and develop machine learning and deep learning systems
o Outstanding analytical and problem-solving skills
• Alexa
o Excellent Python programming skills
o Experience with AWS Lambda
o Experience with Alexa skills
o Alexa skill directives
• Google
o Excellent NodeJS programming skills
o Experience with GCP - Dialogflow and Actions on Google
o Using built-in intents and developing custom intents
o API integration and Postman knowledge
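As a hedged sketch of the first key skill (serving a model as an API), the snippet below wraps a pre-trained scikit-learn model in a small Flask endpoint. The model file, feature layout, route, and port are assumptions for illustration.

# Minimal sketch, assuming Flask, joblib and a hypothetical
# pre-trained model saved as "model.joblib".
import joblib
from flask import Flask, jsonify, request

app = Flask(__name__)
model = joblib.load("model.joblib")  # hypothetical trained model

@app.route("/predict", methods=["POST"])
def predict():
    payload = request.get_json(force=True)  # {"features": [[...], ...]}
    preds = model.predict(payload["features"])
    return jsonify({"predictions": preds.tolist()})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)

The same pattern, deployed behind an HTTP trigger on AWS Lambda, is a common way to back Alexa or Dialogflow fulfillment with model predictions.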
A2Tech Consultants
3 recruiters
Posted by Dhaval B
Pune
4 - 12 yrs
₹6L - ₹15L / yr
Data Engineering
ETL
Spark
Apache Kafka
We are looking for a smart candidate with:
  • Strong Python coding skills and OOP skills
  • Experience with Big Data product architecture
  • Experience with at least one SQL-based database (MySQL, PostgreSQL, etc.) and at least one NoSQL-based database (Cassandra, Elasticsearch, etc.)
  • Hands-on experience with frameworks like Spark RDD, DataFrame and Dataset (see the sketch after this list)
  • Experience in developing ETLs for data products
  • Working knowledge of performance optimization, optimal resource utilization, parallelism and the tuning of Spark jobs
  • Working knowledge of file formats: CSV, JSON, XML, Parquet, ORC, Avro
  • Good to have: working knowledge of an analytical database such as Druid, MongoDB or Apache Hive
  • Experience handling real-time data feeds (working knowledge of Apache Kafka or a similar tool is good to have)
Key Skills:
  • Python and Scala (optional), Spark/PySpark, parallel programming
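For illustration, a minimal PySpark sketch of the DataFrame work described above: read a CSV, clean it, tune parallelism, and write Parquet. The paths, column names, and partition count are hypothetical.

# Minimal sketch, assuming PySpark; paths and columns are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

df = spark.read.option("header", True).csv("/data/in/events.csv")
cleaned = (
    df.filter(F.col("user_id").isNotNull())
      .withColumn("amount", F.col("amount").cast("double"))
)
# Repartitioning is one of the resource-utilization/tuning levers above.
cleaned.repartition(8).write.mode("overwrite").parquet("/data/out/events")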
Agency job
via CareerBabu by Tanisha Takkar
Bengaluru (Bangalore)
2 - 5 yrs
₹10L - ₹40L / yr
Apache Spark
Big Data
Java
Spring
Data Structures
  • Owns the end-to-end implementation of the assigned data processing components/product features, i.e. design, development, deployment, and testing of the data processing components and associated flows, conforming to best coding practices

  • Creation and optimization of data engineering pipelines for analytics projects

  • Support data and cloud transformation initiatives

  • Contribute to our cloud strategy based on prior experience

  • Independently work with all stakeholders across the organization to deliver enhanced functionalities

  • Create and maintain automated ETL processes with a special focus on data flow, error recovery, and exception handling and reporting (see the sketch after this list)

  • Gather and understand data requirements, work with the team to achieve high-quality data ingestion, and build systems that can process and transform the data

  • Be able to comprehend the application of database indexes and transactions

  • Be involved in the design and development of a Big Data predictive analytics SaaS-based customer data platform using object-oriented analysis, design and programming skills, and design patterns

  • Implement ETL workflows for data matching, data cleansing, data integration, and management

  • Maintain existing data pipelines, and develop new data pipelines using big data technologies

  • Responsible for leading the effort of continuously improving the reliability, scalability, and stability of microservices and the platform
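By way of illustration, a minimal sketch of an automated ETL step with the error recovery and exception reporting called for above. The file paths, record format, and retry policy are hypothetical.

# Minimal sketch: extract-transform-load with quarantine of bad rows,
# logged reporting, and retries on transient failures.
import csv
import json
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("etl")

def extract(path):
    with open(path, newline="") as f:
        yield from csv.DictReader(f)

def transform(row):
    row["amount"] = float(row["amount"])  # raises ValueError on bad data
    return row

def load(rows, out_path):
    with open(out_path, "w") as f:
        for row in rows:
            f.write(json.dumps(row) + "\n")

def run(in_path, out_path, retries=3):
    for attempt in range(1, retries + 1):
        try:
            good, bad = [], 0
            for row in extract(in_path):
                try:
                    good.append(transform(row))
                except (ValueError, KeyError):
                    bad += 1  # quarantine instead of failing the whole job
            load(good, out_path)
            log.info("loaded %d rows, skipped %d bad rows", len(good), bad)
            return
        except OSError as exc:  # transient file or network errors
            log.warning("attempt %d failed: %s", attempt, exc)
            time.sleep(2 ** attempt)  # exponential backoff before retrying
    raise RuntimeError("ETL failed after all retries")

run("input.csv", "output.jsonl")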

Aptus Data Labs
1 recruiter
Posted by Merlin Metilda
Bengaluru (Bangalore)
5 - 10 yrs
₹6L - ₹15L / yr
Data Engineering
Big Data
Hadoop
Apache Kafka

Roles & Responsibilities

  1. Proven experience deploying and tuning open-source components into enterprise-ready production tooling. Experience with data centre (Metal as a Service, MAAS) and cloud deployment technologies (AWS or GCP Architect certificates required).
  2. Deep understanding of Linux, from kernel mechanisms through user-space management.
  3. Experience with CI/CD (continuous integration and deployment) system solutions such as Jenkins.
  4. Experience using monitoring tools (local and on public cloud platforms) such as Nagios, Prometheus, Sensu, ELK, CloudWatch, Splunk and New Relic to trigger instant alerts, reports and dashboards (see the sketch after this list). Work closely with the development and infrastructure teams to analyze and design solutions with four-nines (99.99%) uptime on globally distributed, clustered, production and non-production virtualized infrastructure.
  5. Wide understanding of IP networking as well as data centre infrastructure.
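As a hedged illustration of point 4, the sketch below exposes a custom metric that Prometheus can scrape and alert on, using the prometheus_client library. The metric name, port, and the simulated reading are hypothetical.

# Minimal sketch: serve a custom gauge at /metrics for Prometheus.
import random
import time
from prometheus_client import Gauge, start_http_server

queue_depth = Gauge("ingest_queue_depth", "Items waiting in the ingest queue")

start_http_server(8000)  # metrics exposed at http://localhost:8000/metrics
while True:
    queue_depth.set(random.randint(0, 100))  # stand-in for a real reading
    time.sleep(5)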

Skills

  1. Expert with software development tools and source code management: understanding and managing issues and code changes, and grouping them into deployment releases in a stable and measurable way to maximize production. Must be expert at developing and using Ansible roles and configuring deployment templates with Jinja2.
  2. Solid understanding of data collection tools like Flume, Filebeat, Metricbeat and JMX Exporter agents.
  3. Extensive experience operating and tuning the Kafka streaming data platform, specifically as a message queue for big data processing.
  4. Strong understanding of, and hands-on experience with:
     • the Apache Spark framework, specifically Spark Core and Spark Streaming;
     • orchestration platforms Mesos and Kubernetes;
     • data storage platforms Elastic Stack, Carbon, ClickHouse, Cassandra, Ceph and HDFS;
     • core presentation technologies Kibana and Grafana.
  5. Excellent scripting and programming skills (Bash, Python, Java, Go, Rust). Must have previous experience with Rust in order to support and improve in-house developed products.

Certification

Red Hat Certified Architect certificate or equivalent required. CCNA certificate required. 3-5 years of experience running open-source big data platforms.

Pune
5 - 8 yrs
₹10L - ₹17L / yr
Python
Big Data
Amazon Web Services (AWS)
Windows Azure
Google Cloud Platform (GCP)
  • Must have 5-8 years of experience in handling data
  • Must have the ability to interpret large amounts of data and to multi-task
  • Must have strong knowledge of and experience with programming (Python), Linux/Bash scripting, and databases (SQL, etc.)
  • Must have strong analytical and critical thinking to resolve business problems using data and tech
  • Must have domain familiarity with, and interest in, cloud technologies (Google GCP, Microsoft Azure, Amazon AWS), open-source technologies, and enterprise technologies
  • Must have the ability to collect, organize, analyze, and disseminate significant amounts of information with attention to detail and accuracy
  • Must have good communication skills
  • Working knowledge of / exposure to Elasticsearch, PostgreSQL, Athena, PrestoDB and Jupyter Notebook (see the sketch after this list)
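For illustration, a minimal sketch of pulling a slice of data out of PostgreSQL for analysis with Python, using psycopg2. The connection string, table, and columns are hypothetical placeholders.

# Minimal sketch, assuming psycopg2 and a hypothetical `events` table.
import psycopg2

conn = psycopg2.connect("dbname=analytics user=analyst host=localhost")
try:
    with conn.cursor() as cur:
        cur.execute(
            "SELECT category, COUNT(*) FROM events "
            "WHERE created_at >= %s GROUP BY category",
            ("2024-01-01",),
        )
        for category, count in cur.fetchall():
            print(category, count)
finally:
    conn.close()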
Ukraine
3 - 10 yrs
₹15L - ₹30L / yr
Big Data
Elasticsearch
Hadoop
Apache Kafka
Apache Hive
Responsibility:
  • Studied Computer Science
  • 5+ years of software development experience
  • Must have experience in Elasticsearch (2+ years of experience is preferable)
  • Skills in Java, Python or Scala
  • Passionate about learning big data, data mining and data analysis technologies
  • Self-motivated; independent, organized and proactive; highly responsive, flexible, and adaptable when working across multiple teams
  • Strong SQL skills, including query optimization, are required
  • Experience working with large, complex datasets is required
  • Experience with recommendation systems and data warehouse technologies is preferred
  • You possess an intense curiosity about data and a strong commitment to practical problem-solving
  • Creative in thinking up data-centric products that will be used to understand online customer behavior and marketing
  • Build systems to pull meaningful insights from our data platform (see the sketch after this list)
  • Integrate our analytics platform internally across products and teams
  • Focus on performance, throughput and latency, and drive these throughout our architecture

Bonuses:
  • Experience with big data architectures such as the Lambda Architecture
  • Experience working with big data technologies (like Hadoop, Java Map/Reduce, Hive, Spark SQL) and real-time processing frameworks (like Spark Streaming, Storm, AWS Kinesis)
  • Proficiency in key-value stores such as HBase/Cassandra, Redis, Riak and MongoDB
  • Experience with AWS EMR
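As a hedged example of pulling a simple insight from the data platform, the sketch below runs a terms aggregation against Elasticsearch with the official Python client. The host, index, and field names are assumptions.

# Minimal sketch, assuming elasticsearch-py 8.x and a hypothetical
# "user-events" index with a keyword "category" field.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")
resp = es.search(
    index="user-events",
    size=0,  # we only want the aggregation, not individual hits
    aggs={"top_categories": {"terms": {"field": "category", "size": 10}}},
)
for bucket in resp["aggregations"]["top_categories"]["buckets"]:
    print(bucket["key"], bucket["doc_count"])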
Poker Yoga
3 recruiters
Posted by Anuj Kumar Kodam
Bengaluru (Bangalore)
2 - 4 yrs
₹13L - ₹18L / yr
Elasticsearch
MongoDB
NoSQL Databases
Redis
Relational Database (RDBMS)
At Poker Yoga we aim to make poker a tool for self-transformation: by providing players with the tools to improve their skill, the learning framework to bring skill to the core of their approach to the game, and experiences to enhance their perception. We are looking for passionate coders who love building products that speak for themselves. It's an invitation to join a family, not a company. Looking forward to working with you!