ELK Developer at NSEIT

Posted by Akansha Singh
Bengaluru (Bangalore)
3 - 5 yrs
₹6L - ₹20L / yr
Full time
Skills
ELK
ELK Stack
Elastic Search
Logstash
Kibana
API
Linux
• Introduction: The ELK stack (Elasticsearch, Logstash, and Kibana) is an end-to-end stack that delivers actionable insights in real time from almost any type of structured or unstructured data source. It is the most popular log management platform.

• Responsibilities:
o Work with the Elasticsearch APIs, shards, and related cluster operations
o Write parsers in Logstash (a parsing sketch follows this list)
o Create dashboards in Kibana
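For illustration, here is a minimal Python sketch of the kind of parsing a Logstash filter (typically a grok pattern) performs; the log format and field names are assumptions for the example, not taken from an actual NSEIT pipeline:

```python
import re

# A hypothetical access-log line; real formats vary by data source.
LOG_LINE = '203.0.113.7 - - [10/Oct/2023:13:55:36 +0530] "GET /api/v1/orders HTTP/1.1" 200 512'

# Rough Python equivalent of a grok pattern such as %{COMMONAPACHELOG}.
PATTERN = re.compile(
    r'(?P<client_ip>\S+) \S+ \S+ \[(?P<timestamp>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) \S+" (?P<status>\d{3}) (?P<bytes>\d+)'
)

def parse(line: str) -> dict:
    """Extract structured fields from a raw log line, as a Logstash parser would."""
    match = PATTERN.match(line)
    return match.groupdict() if match else {"message": line, "tags": ["_grokparsefailure"]}

print(parse(LOG_LINE))
```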


• Mandatory Experience:
o Must have a very good understanding of log analytics
o Expert-level hands-on experience with Elasticsearch, Logstash, and Kibana
o Elasticsearch: should be able to write queries against the Elasticsearch and Kibana APIs (a query sketch follows this list)
o Logstash: should be able to write parsers
o Kibana: create different visualizations and dashboards according to client needs
o Scripts: should be able to write shell scripts on Linux
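As a concrete example of the API work described above, here is a minimal sketch that queries the Elasticsearch search API over HTTP; the host, index pattern, and field names are assumptions for illustration:

```python
import requests

ES_URL = "http://localhost:9200"  # assumed local development cluster
INDEX = "app-logs-*"              # hypothetical index pattern

# Query DSL: count ERROR-level events from the last hour.
query = {
    "query": {
        "bool": {
            "filter": [
                {"term": {"level": "ERROR"}},
                {"range": {"@timestamp": {"gte": "now-1h"}}},
            ]
        }
    }
}

resp = requests.get(f"{ES_URL}/{INDEX}/_search", json=query, timeout=10)
resp.raise_for_status()
print("hits:", resp.json()["hits"]["total"])
```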

About NSEIT

NSEIT is a global technology firm with a focus on the financial services industry. We are a vertical specialist organization with domain expertise and technology focus aligned to the needs of financial institutions. We offer Application Services, IT Enabled Services (Assessments), Testing Center of Excellence, Infrastructure Services, Integrated Security Response Center and Analytics as a Service primarily for the BFSI segment. 

We are a 100% subsidiary of National Stock Exchange of India Limited (NSEIL). Being part of the stock exchange, our solutions inherently encapsulate industry strength, security, scalability, reliability, and performance features.

Our focus on domain and key technologies enables us to use new trends in digital technologies like cloud computing, mobility and analytics while building solutions for our customers.

We are passionate about building innovative, futuristic, and robust solutions for our customers. We have been assessed at Maturity Level 5 in Capability Maturity Model Integration for Development (CMMI®-DEV) v1.3. We are also certified to ISO 9001:2015 for providing high-quality products and services, and to ISO 27001:2013 for our Information Security Management System.

Our offices are located in India and the US.

Founded
1999
Type
Products & Services
Size
100-1000 employees
Stage
Profitable

Similar jobs

Senior Software Engineer

at Digital Banking Firm

Agency job
via Qrata
Apache Kafka
Hadoop
Spark
Apache Hadoop
Big Data
ETL
Java
Javascript
Elastic Search
Apache Spark
Bengaluru (Bangalore)
5 - 10 yrs
₹20L - ₹40L / yr
Location - Bangalore (Remote for now)
 
Designation - Sr. SDE (Platform Data Science)
 
About Platform Data Science Team

The Platform Data Science team works at the intersection of data science and engineering. Domain experts develop and advance platforms, including the data platform, the machine learning platform, and platforms for Forecasting, Experimentation, Anomaly Detection, Conversational AI, Underwriting of Risk, Portfolio Management, Fraud Detection & Prevention, and many more. We are also the Data Science and Analytics partners for Product, and provide Behavioural Science insights across Jupiter.
 
About the role:

We’re looking for strong Software Engineers who can combine EMR, Redshift, Hadoop, Spark, Kafka, Elastic Search, TensorFlow, PyTorch, and other technologies to build the next-generation Data Platform, ML Platform, and Experimentation Platform. If this sounds interesting, we’d love to hear from you!
This role involves designing and developing software products that impact many areas of our business. The individual in this role will help define requirements, create software designs, implement code to those specifications, provide thorough unit and integration testing, and support products while they are deployed and used by our stakeholders.

Key Responsibilities:

Participate in, own, and influence the architecture and design of systems
Collaborate with other engineers, data scientists, and product managers
Build intelligent systems that drive decisions
Build systems that enable us to perform experiments and iterate quickly
Build platforms that enable scientists to train, deploy, and monitor models at scale
Build analytical systems that drive better decision making
 

Required Skills:

Programming experience with at least one modern language such as Java or Scala, including object-oriented design
Experience in contributing to the architecture and design (architecture, design patterns, reliability and scaling) of new and current systems
Bachelor’s degree in Computer Science or related field
Computer Science fundamentals in object-oriented design
Computer Science fundamentals in data structures
Computer Science fundamentals in algorithm design, problem solving, and complexity analysis
Experience in databases, analytics, big data systems, or business intelligence products:
Data lake, data warehouse, ETL, ML platform
Big data tech like Hadoop and Apache Spark
Job posted by
Prajakta Kulkarni

Data Scientist

at A stealth mode realty tech start-up

Agency job
via Qrata
Data Science
Natural Language Processing (NLP)
R Programming
Python
SQL
Algorithms
API
Bengaluru (Bangalore)
1 - 5 yrs
₹10L - ₹32L / yr
1. A good understanding of the fundamentals of data science/algorithms or software engineering
2. Preferably, should have done a project or internship related to the field
3. Knowledge of SQL is a plus
4. A deep desire to learn new things and be part of a vibrant start-up
5. You will have a lot of free hand, and this comes with immense responsibility, so it is expected that you will be willing to master new things that come along!

Job Description:
1. Design and build a pipeline to train models for NLP problems like classification and NER (a minimal training-pipeline sketch follows this list)
2. Develop APIs that showcase our models' capabilities and enable third-party integrations
3. Work across a microservices architecture that processes thousands of documents per day.
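As a rough illustration of item 1, here is a minimal scikit-learn sketch of a text-classification training pipeline; the documents, labels, and model choice are invented placeholders, not the company's actual stack:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Toy labelled documents; a real pipeline would load a proper training corpus.
docs = [
    "2BHK flat for sale in Whitefield",
    "Seeking rental apartment near the metro",
    "Commercial plot available for purchase",
    "Looking to lease office space",
]
labels = ["sale", "rent", "sale", "rent"]

# TF-IDF features feeding a linear classifier: a common text-classification baseline.
clf = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
    ("model", LogisticRegression(max_iter=1000)),
])
clf.fit(docs, labels)

print(clf.predict(["3BHK house for sale"]))  # expected: ['sale']
```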
Job posted by
Prajakta Kulkarni
Natural Language Processing (NLP)
PyTorch
Python
Java
Solr
Elastic Search
Bengaluru (Bangalore)
4 - 6 yrs
₹4L - ₹20L / yr
Skill Set:
  • 4+ years of experience; solid understanding of Python and Java, and general software development skills (source code management, debugging, testing, deployment, etc.)
  • Experience working with Solr and Elasticsearch; experience with NLP technologies and the handling of unstructured text; detailed understanding of text pre-processing and normalisation techniques such as tokenisation, lemmatisation, stemming, and POS tagging (see the sketch after this list)
  • Prior experience implementing traditional ML solutions for classification, regression, or clustering problems; expertise in text analytics (Sentiment Analysis, Entity Extraction, Language Modelling) and associated sequence-learning models (RNN, LSTM, GRU)
  • Comfortable working with deep-learning libraries (e.g. PyTorch)
  • Candidates can even be relative freshers with 1 or 2 years of experience; IIT, IIIT, BITS Pilani, and top-5 local colleges and universities are preferred
  • A Master's degree in machine learning is preferred
  • Candidates can be sourced from Mu Sigma and Manthan
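For reference, here is a minimal NLTK sketch of the pre-processing and normalisation techniques named above (tokenisation, POS tagging, stemming, lemmatisation); it assumes the standard NLTK data packages can be downloaded:

```python
import nltk
from nltk.stem import PorterStemmer, WordNetLemmatizer

# One-time download of the required NLTK data packages.
for pkg in ["punkt", "wordnet", "averaged_perceptron_tagger"]:
    nltk.download(pkg, quiet=True)

text = "The analysts were studying better normalisation strategies"

tokens = nltk.word_tokenize(text)                            # tokenisation
tagged = nltk.pos_tag(tokens)                                # POS tagging
stems = [PorterStemmer().stem(t) for t in tokens]            # stemming
lemmas = [WordNetLemmatizer().lemmatize(t) for t in tokens]  # lemmatisation

print(tagged)
print(stems)   # e.g. 'studying' -> 'studi'
print(lemmas)
```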
Job posted by
HyreSpree Team

Software Architect/CTO

at Blenheim Chalcot IT Services India Pvt Ltd

SQL Azure
ADF
Azure data factory
Azure Datalake
Azure Databricks
ETL
PowerBI
Azure Synapse
Data Warehouse (DWH)
API
SFTP
JSON
Java
Python
C#
Javascript
Amazon Web Services (AWS)
Google Cloud Platform (GCP)
Mumbai
5 - 8 yrs
₹25L - ₹30L / yr
As a hands-on Data Architect, you will be part of a team responsible for building enterprise-grade Data Warehouse and Analytics solutions that aggregate data across diverse sources and data types, from text, video, and audio through to live streams and IoT, in an agile project delivery environment with a focus on DataOps and Data Observability. You will work with Azure SQL Databases, Synapse Analytics, Azure Data Factory, Azure Datalake Gen2, Azure Databricks, Azure Machine Learning, Azure Service Bus, Azure Serverless (LogicApps, FunctionApps), Azure Data Catalogue, and Purview, among other tools, gaining opportunities to learn some of the most advanced and innovative techniques in the cloud data space.

You will be building Power BI based analytics solutions to provide actionable insights into customer data, and to measure operational efficiencies and other key business performance metrics.

You will be involved in the development, build, deployment, and testing of customer solutions, with responsibility for the design, implementation, and documentation of the technical aspects, including integration, to ensure the solution meets customer requirements. You will be working closely with fellow architects, engineers, analysts, team leads, and project managers to plan, build, and roll out data-driven solutions.
Expertise:
• Proven expertise in developing data solutions with Azure SQL Server and Azure SQL Data Warehouse (now Synapse Analytics)
• Demonstrated expertise in data modelling and data warehouse methodologies and best practices
• Ability to write efficient data pipelines for ETL using Azure Data Factory or equivalent tools
• Integration of data feeds utilising both structured (e.g. XML/JSON) and flat schemas (e.g. CSV, TXT, XLSX) across a wide range of electronic delivery mechanisms (API/SFTP/etc.); see the sketch after this list
• Azure DevOps knowledge, essential for CI/CD of data ingestion pipelines and integrations
• Experience with object-oriented/object-function scripting languages such as Python, Java, JavaScript, C#, Scala, etc. is required
• Expertise in creating technical and architecture documentation (e.g. HLD/LLD) is a must
• Proven ability to rapidly analyse and design solution architecture in client proposals is an added advantage
• Expertise with big data tools (Hadoop, Spark, Kafka, NoSQL databases, stream-processing systems) is a plus
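As a small illustration of the feed-integration point above, here is a Python sketch that flattens a structured JSON feed into a flat CSV schema; the feed shape and field names are hypothetical:

```python
import csv
import io
import json

# A hypothetical structured feed entry, as it might arrive over an API or SFTP drop.
feed = json.loads("""[
  {"id": 1, "customer": {"name": "Acme", "region": "EMEA"}, "amount": 120.5},
  {"id": 2, "customer": {"name": "Globex", "region": "APAC"}, "amount": 87.0}
]""")

# Flatten the nested structure into the flat schema a CSV consumer expects.
out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=["id", "customer_name", "region", "amount"])
writer.writeheader()
for row in feed:
    writer.writerow({
        "id": row["id"],
        "customer_name": row["customer"]["name"],
        "region": row["customer"]["region"],
        "amount": row["amount"],
    })

print(out.getvalue())
```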
Essential Experience:
• 5 or more years of hands-on experience in a data architect role, covering the development of ingestion, integration, data auditing, reporting, and testing with the Azure SQL tech stack
• Full data and analytics project lifecycle experience (including costing and cost management of data solutions) in an Azure PaaS environment is essential
• Microsoft Azure and Data certifications, at least at the fundamentals level, are a must
• Experience using agile development methodologies, version control systems, and repositories is a must
• A good, applied understanding of the end-to-end data process development life cycle
• A good working knowledge of data warehouse methodology using Azure SQL
• A good working knowledge of the Azure platform and its components, and the ability to leverage its resources to implement solutions, is a must
• Experience working in the public sector, or in an organisation servicing the public sector, is a must
• Ability to work to demanding deadlines, keep momentum, and deal with conflicting priorities in an environment undergoing a programme of transformational change
• The ability to contribute and adhere to standards, excellent attention to detail, and a strong drive for quality
Desirables:
• Experience with AWS or Google Cloud platforms will be an added advantage
• Experience with Azure ML services will be an added advantage

Personal Attributes:
• Articulate and clear in communications to mixed audiences: in writing, through presentations, and one-to-one
• Ability to present highly technical concepts and ideas in business-friendly language
• Ability to effectively prioritise and execute tasks in a high-pressure environment
• Calm and adaptable in the face of ambiguity and in a fast-paced, quick-changing environment
• Extensive experience working in a team-oriented, collaborative environment, as well as working independently
• Comfortable with the multi-project, multi-tasking consulting Data Architect lifestyle
• Excellent interpersonal skills with teams, and an ability to build trust with clients
• Ability to support and work with cross-functional teams in a dynamic environment
• A passion for achieving business transformation; the ability to energise and excite those you work with
• Initiative; the ability to work flexibly in a team, working comfortably without direct supervision
Job posted by
VIJAYAKIRON ABBINENI

Data Engineer

at VIMANA

Founded 2009  •  Product  •  20-100 employees  •  Profitable
Data engineering
Data Engineer
Apache Kafka
Big Data
Java
NodeJS (Node.js)
Elastic Search
Test driven development (TDD)
Python
Remote, Chennai
2 - 5 yrs
₹10L - ₹20L / yr

We are looking for passionate, talented and super-smart engineers to join our product development team. If you are someone who innovates, loves solving hard problems, and enjoys end-to-end product development, then this job is for you! You will be working with some of the best developers in the industry in a self-organising, agile environment where talent is valued over job title or years of experience.

 

Responsibilities:

  • You will be involved in end-to-end development of VIMANA technology, adhering to our development practices and expected quality standards.
  • You will be part of a highly collaborative Agile team which passionately follows SAFe Agile practices, including pair-programming, PR reviews, TDD, and Continuous Integration/Delivery (CI/CD).
  • You will be working with cutting-edge technologies and tools for stream processing using Java, NodeJS and Python, using frameworks like Spring, RxJS etc.
  • You will be leveraging big data technologies like Kafka, Elasticsearch, and Spark, processing more than 10 billion events per day, to build a maintainable system at scale (a consumer sketch follows this list).
  • You will be building Domain Driven APIs as part of a micro-service architecture.
  • You will be part of a DevOps culture where you will get to work with production systems, including operations, deployment, and maintenance.
  • You will have an opportunity to continuously grow and build your capabilities, learning new technologies, languages, and platforms.
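To make the stream-processing point concrete, here is a minimal sketch of a Kafka consumer using the kafka-python client; the topic, broker address, and event fields are assumptions, not VIMANA's actual configuration:

```python
import json

from kafka import KafkaConsumer  # pip install kafka-python

# Hypothetical topic and broker; real deployment details are not in the posting.
consumer = KafkaConsumer(
    "machine-events",
    bootstrap_servers="localhost:9092",
    group_id="analytics-sketch",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
)

# At billions of events per day, many such consumers run in parallel,
# with Kafka assigning partitions across the consumer group.
for record in consumer:
    event = record.value
    if event.get("alert"):
        print(f"partition={record.partition} offset={record.offset} event={event}")
```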

 

Requirements:

  • Undergraduate degree in Computer Science or a related field, or equivalent practical experience.
  • 2 to 5 years of product development experience.
  • Experience building applications using Java, NodeJS, or Python.
  • Deep knowledge in Object-Oriented Design Principles, Data Structures, Dependency Management, and Algorithms.
  • Working knowledge of message queuing, stream processing, and highly scalable Big Data technologies.
  • Experience in working with Agile software methodologies (XP, Scrum, Kanban), TDD and Continuous Integration (CI/CD).
  • Experience using NoSQL databases like MongoDB or Elasticsearch.
  • Prior experience with container orchestrators like Kubernetes is a plus.
About VIMANA

We build products and platforms for the Industrial Internet of Things. Our technology is being used around the world in mission-critical applications - from improving the performance of manufacturing plants, to making electric vehicles safer and more efficient, to making industrial equipment smarter.

Please visit https://govimana.com/ to learn more about what we do.

Why Explore a Career at VIMANA
  • We recognize that our dedicated team members make us successful and we offer competitive salaries.
  • We are a workplace that values work-life balance, provides flexible working hours, and full time remote work options.
  • You will be part of a team that is highly motivated to learn and work on cutting edge technologies, tools, and development practices.
  • Bon Appetit! Enjoy catered breakfasts, lunches and free snacks!

VIMANA Interview Process
We usually aim to complete all interviews within a week and provide prompt feedback to the candidate. As of now, all interviews are conducted online due to the COVID situation.

1. Telephonic screening (30 min)

A 30-minute telephonic interview to understand and evaluate the candidate's fit with the job role and the company.
Clarify any queries regarding the job/company.
Give an overview of further interview rounds.

2. Technical Rounds

This would be a deep technical round to evaluate the candidate's technical capability pertaining to the job role.

3. HR Round

The candidate's team and cultural fit will be evaluated during this round.

We would proceed with releasing the offer if the candidate clears all the above rounds.

Note: In certain cases, we might schedule additional rounds if needed before releasing the offer.
Job posted by
Loshy Chandran

Big Data/Java Programming

at Dailyhunt

Founded 2007  •  Product  •  500-1000 employees  •  Raised funding
Java
Big Data
Hadoop
Pig
Apache Hive
MapReduce
Elastic Search
MongoDB
Analytics
Scalability
Leadership
Software engineering
Data Analytics
Data domain
Programming
Apache Hadoop
Apache Pig
Communication Skills
Bengaluru (Bangalore)
3 - 9 yrs
₹3L - ₹9L / yr
What You'll Do:
- Develop analytic tools, working on Big Data in a distributed environment; scalability will be the key
- Provide architectural and technical leadership in developing our core analytics platform
- Lead development efforts on product features in Java
- Help scale our mobile platform as we experience massive growth

What We Need:
- Passion to build an analytics & personalisation platform at scale
- 3 to 9 years of software engineering experience with a product-based company in the data analytics/big data domain
- Passion for designing and developing from scratch
- Expert-level Java programming and experience leading the full lifecycle of application development
- Experience in Analytics, Hadoop, Pig, Hive, MapReduce, Elasticsearch, and MongoDB is an additional advantage
- Strong communication skills, verbal and written
Job posted by
khushboo jain

Data Platform Engineer

at Hypersonix Inc

Founded 2018  •  Product  •  100-500 employees  •  Profitable
Python
Java
Scala
Apache Kafka
Datawarehousing
Data Warehouse (DWH)
Hadoop
Data migration
API
Spark
NOSQL Databases
data engineer
Remote, Bengaluru (Bangalore)
5 - 7 yrs
₹15L - ₹30L / yr
At Hypersonix, our platform technology aims to solve regular and persistent problems in the data platform domain. We've established ourselves as a leading developer of innovative software solutions. We're looking for a highly skilled Data Platform Engineer to join our program and platform design team. Our ideal candidate will have expert knowledge of software development processes and solid experience in designing, developing, evaluating, and troubleshooting data platforms and data-driven applications. If finding issues and fixing them with beautiful, meticulous code are among the talents that make you tick, we'd like to hear from you.

Objectives of this Role:
• Design and develop creative and innovative frameworks/components for data platforms, as we continue to experience dramatic growth in the usage and visibility of our products
• Work closely with data scientists and product owners to come up with better design/development approaches for the application and platform to scale and serve their needs
• Examine existing systems, identifying flaws and creating solutions to improve service uptime and time-to-resolve through monitoring and automated remediation
• Plan and execute full software development life cycles (SDLC) for each assigned project, adhering to company standards and expectations

Daily and Monthly Responsibilities:
• Design and build tools/frameworks/scripts to automate development, testing, deployment, management, and monitoring of the company's 24x7 services and products
• Plan and scale distributed software and applications, applying synchronous and asynchronous design patterns; write code and deliver with urgency and quality
• Collaborate with the global team, producing project work plans and analyzing the efficiency and feasibility of project operations, while leveraging the global technology stack and making localized improvements
• Manage large volumes of data and process them in real-time and batch modes as needed
• Track, document, and maintain software system functionality, both internally and externally, leveraging opportunities to improve engineering productivity
• Perform code reviews and Git operations, run CI/CD, and mentor and assign tasks to junior team members

Responsibilities:
• Writing reusable, testable, and efficient code
• Design and implementation of low-latency, high-availability, and performant applications
• Integration of user-facing elements developed by front-end developers with server-side logic
• Implementation of security and data protection
• Integration of data storage solutions

Skills and Qualifications:
• Bachelor's degree in software engineering or information technology
• 5-7 years' experience engineering software and networking platforms
• 5+ years' professional experience with Python, Java, or Scala
• Strong experience in API development and API integration
• Proven knowledge of data migration, platform migration, CI/CD processes, and orchestration workflows like Airflow, Luigi, or Azkaban (a minimal DAG sketch follows this list)
• Experience with data engineering tools and platforms such as Kafka, Spark, Databricks, Hadoop, and NoSQL platforms
• Prior experience in data warehouse and OLAP design and deployment
• Proven ability to document design processes, including development, tests, analytics, and troubleshooting
• Experience with rapid development cycles in a web-based/multi-cloud environment
• Strong scripting and test-automation abilities

Good-to-have Qualifications:
• Working knowledge of relational databases as well as ORM and SQL technologies
• Proficiency with multi-OS environments, Docker, and Kubernetes
• Proven experience designing interactive applications and large-scale platforms
• Desire to continue to grow professional capabilities with ongoing training and educational opportunities
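For the orchestration point above, here is a minimal Apache Airflow DAG sketch; the DAG id, schedule, and task bodies are placeholders for illustration:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pulling source data")         # placeholder for a real extraction step

def load():
    print("loading into the warehouse")  # placeholder for a real load step

# A two-task daily pipeline: extract must finish before load starts.
with DAG(
    dag_id="platform_ingest_sketch",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_load = PythonOperator(task_id="load", python_callable=load)
    t_extract >> t_load
```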
Job posted by
Manu Panwar

Big Data Engineer

at Leading Digital marketing ageny

Big Data
Elastic Search
Hadoop
Apache Kafka
Apache Hive
Ukraine
3 - 10 yrs
₹15L - ₹30L / yr
Responsibilities:
• Studied Computer Science
• 5+ years of software development experience
• Must have experience in Elasticsearch (2+ years of experience is preferable)
• Skills in Java, Python, or Scala
• Passionate about learning big data, data mining, and data analysis technologies
• Self-motivated; independent, organized, and proactive; highly responsive, flexible, and adaptable when working across multiple teams
• Strong SQL skills, including query optimization, are required
• Experience working with large, complex datasets is required
• Experience with recommendation systems and data warehouse technologies is preferred
• You possess an intense curiosity about data and a strong commitment to practical problem-solving
• Creative in thinking up data-centric products for online customer behaviour and marketing
• Build systems to pull meaningful insights from our data platform
• Integrate our analytics platform internally across products and teams
• Focus on performance, throughput, and latency, and drive these throughout our architecture

Bonuses:
• Experience with big data architectures such as the Lambda Architecture
• Experience working with big data technologies (like Hadoop, Java MapReduce, Hive, Spark SQL) and real-time processing frameworks (like Spark Streaming, Storm, AWS Kinesis); a Spark SQL sketch follows
• Proficiency in key-value stores such as HBase/Cassandra, Redis, Riak, and MongoDB
• Experience with AWS EMR
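As a small illustration of the Spark SQL bonus skill, here is a PySpark sketch that pulls a simple insight from event data; the input path, fields, and metric are invented for the example:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("insights-sketch").getOrCreate()

# Hypothetical clickstream dataset; the path and schema are illustrative only.
events = spark.read.json("s3://example-bucket/clickstream/*.json")
events.createOrReplaceTempView("events")

# Daily active users per campaign, expressed as plain Spark SQL.
daily = spark.sql("""
    SELECT campaign, to_date(ts) AS day, COUNT(DISTINCT user_id) AS dau
    FROM events
    GROUP BY campaign, to_date(ts)
    ORDER BY day
""")
daily.show()
```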
Job posted by
Vidushi Singh

Data Engineer

at Pluto Seven Business Solutions Pvt Ltd

Founded 2017  •  Products & Services  •  20-100 employees  •  Raised funding
MySQL
Python
Big Data
Google Cloud Storage
API
SQL Query Analyzer
Relational Database (RDBMS)
Agile/Scrum
Bengaluru (Bangalore)
3 - 9 yrs
₹6L - ₹18L / yr
Data Engineer: Pluto7 is a services and solutions company focused on building ML, AI, and analytics solutions to accelerate business transformation. We are a Premier Google Cloud Partner, servicing the Retail, Manufacturing, Healthcare, and Hi-Tech industries. We're seeking passionate people to work with us to change the way data is captured, accessed, and processed, to enable data-driven, insightful decisions.

Must-have skills:
• Hands-on experience with database systems (structured and unstructured)
• Programming in Python, R, or SAS
• Overall knowledge of, and exposure to, architecting solutions on cloud platforms like GCP, AWS, and Microsoft Azure
• Develop and maintain scalable data pipelines, with a focus on writing clean, fault-tolerant code
• Hands-on experience in data model design and developing BigQuery/SQL (any variant) stored procedures (a BigQuery sketch follows this list)
• Optimize data structures for efficient querying of those systems
• Collaborate with internal and external data sources to ensure integrations are accurate, scalable, and maintainable
• Collaborate with business intelligence/analytics teams on data mart optimizations, query tuning, and database design
• Execute proofs of concept to assess strategic opportunities and future data extraction and integration capabilities
• At least 2 years of experience in building applications, solutions, and products based on analytics
• Data extraction, data cleansing, and transformation
• Strong knowledge of REST APIs, HTTP servers, and MVC architecture
• Knowledge of continuous integration/continuous deployment

Preferred but not required:
• Machine learning and deep learning experience
• Certification on any cloud platform
• Experience with data migration from on-prem to cloud environments
• Exceptional analytical, quantitative, problem-solving, and critical thinking skills
• Excellent verbal and written communication skills

Work Location: Bangalore
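Since Pluto7 is a Google Cloud partner and the list above mentions BigQuery, here is a minimal sketch using the google-cloud-bigquery client; the project, dataset, and table names are hypothetical:

```python
from google.cloud import bigquery  # pip install google-cloud-bigquery

client = bigquery.Client()  # uses application-default credentials

# Hypothetical table; shows the shape of a parameterised BigQuery query.
query = """
    SELECT store_id, SUM(amount) AS revenue
    FROM `example-project.sales.transactions`
    WHERE sale_date >= @since
    GROUP BY store_id
    ORDER BY revenue DESC
"""
job_config = bigquery.QueryJobConfig(
    query_parameters=[bigquery.ScalarQueryParameter("since", "DATE", "2024-01-01")]
)

for row in client.query(query, job_config=job_config).result():
    print(row.store_id, row.revenue)
```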
Job posted by
Sindhu Narayan

Tech Lead

at Spire technologies

Founded 2007  •  Product  •  500-1000 employees  •  Profitable
Elastic Search
Java
RESTful APIs
Bengaluru (Bangalore)
4 - 8 yrs
₹12L - ₹25L / yr
Hope you are doing well. I manage hiring for Spire, the global leader in Contextual Search and Analytics. I came across your profile in the portal while looking for Lead/Senior Search Engineers.

What we expect from a candidate:

Experience: 5-10 years

Skills required:
• Elasticsearch expertise, minimum 2 years (understanding of cluster management, shards, ES mappings, analyzers, plugins, etc.); hands-on with ES version 5.x and later (an index-creation sketch follows this list)
• Java and J2EE expertise
• Exposure to distributed and microservice architectures
• Design and architecture capability
• Agile development practices
• Ability to understand business problems and translate them into technical solutions
• Ability to work in a fast-paced environment and to drive people

Desired skills:
• Expertise in designing and developing search systems
• Exposure to voice-based and text-based search

About Spire: We at Spire are redefining search and match technology to deliver contextual business outcomes; we provide contextual meaning from ANY content, whether it is text, audio, or video. We have built the world's first Context Intelligence Platform, offered as PaaS, which can be applied to any functional domain in any industry. We are on an adventurous journey transforming the businesses and human capital of global enterprise customers in North America and Europe. We are ranked the 6th Fastest Technology Company in India (Deloitte 2016). We are based in Cessna Park, Outer Ring Road, Bangalore. You can learn more about us at http://www.spiretechnologies.com/india/ and http://spiretalentship.com/. Please let me know your interest in exploring this opportunity with us.
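To ground the Elasticsearch skills listed above (shards, mappings, analyzers), here is a sketch that creates an index with a custom analyzer over the REST API; the cluster URL, index name, and fields are assumptions:

```python
import requests

ES_URL = "http://localhost:9200"  # assumed local development cluster

# Index definition: shard count, a custom analyzer (lowercase + Porter stemming),
# and a mapping that applies the analyzer to the searchable text field.
index_body = {
    "settings": {
        "number_of_shards": 3,
        "analysis": {
            "analyzer": {
                "english_stem": {
                    "type": "custom",
                    "tokenizer": "standard",
                    "filter": ["lowercase", "porter_stem"],
                }
            }
        },
    },
    "mappings": {
        "properties": {
            "title": {"type": "text", "analyzer": "english_stem"},
            "created": {"type": "date"},
        }
    },
}

resp = requests.put(f"{ES_URL}/search-documents", json=index_body, timeout=10)
print(resp.json())
```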
Job posted by
Gowtham Dev