• Responsibilities:
o Work with Elasticsearch APIs, shards and related cluster concepts
o Write parsers in Logstash
o Create dashboards in Kibana
• Mandatory Experience:
o Very good understanding of log analytics
o Expert-level, hands-on experience with Elasticsearch, Logstash and Kibana
o Elasticsearch: should be able to work with the Elasticsearch and Kibana APIs (a query sketch follows this list)
o Logstash: should be able to write parsers
o Kibana: create visualizations and dashboards according to client needs
o Scripts: should be able to write shell scripts on Linux
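As a rough illustration of the Elasticsearch API work this role involves, the sketch below queries the search API over HTTP from Python. The cluster URL, the app-logs index name and the field names are assumptions for illustration only, not part of the actual environment.

    import requests

    ES_URL = "http://localhost:9200"   # assumed local cluster; adjust to the real deployment
    INDEX = "app-logs"                 # hypothetical index name

    # Count documents per log level over the last hour using a terms aggregation.
    query = {
        "query": {"range": {"@timestamp": {"gte": "now-1h"}}},
        "aggs": {"by_level": {"terms": {"field": "level.keyword"}}},
        "size": 0,
    }
    resp = requests.post(f"{ES_URL}/{INDEX}/_search", json=query, timeout=10)
    resp.raise_for_status()
    for bucket in resp.json()["aggregations"]["by_level"]["buckets"]:
        print(bucket["key"], bucket["doc_count"])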
About NSEIT
NSEIT is a global technology firm with a focus on the financial services industry. We are a vertical specialist organization with domain expertise and technology focus aligned to the needs of financial institutions. We offer Application Services, IT Enabled Services (Assessments), Testing Center of Excellence, Infrastructure Services, Integrated Security Response Center and Analytics as a Service primarily for the BFSI segment.
We are a 100% subsidiary of National Stock Exchange of India Limited (NSEIL). As part of the stock exchange, our solutions inherently encapsulate industry strength, security, scalability, reliability and performance features.
Our focus on domain and key technologies enables us to use new trends in digital technologies like cloud computing, mobility and analytics while building solutions for our customers.
We are passionate about building innovative, futuristic and robust solutions for our customers. We have been assessed at Maturity Level 5 in Capability Maturity Model Integration for Development (CMMI® - DEV) v 1.3. We are also certified for ISO 9001:2015 for providing high quality products and services, and ISO 27001:2013 for our Information Security Management Systems.
Our offices are located in India and the US.
The Platform Data Science team works at the intersection of data science and engineering. Domain experts develop and advance platforms, including the data platform, the machine learning platform, and other platforms for Forecasting, Experimentation, Anomaly Detection, Conversational AI, Underwriting of Risk, Portfolio Management, Fraud Detection & Prevention, and many more. We are also the Data Science and Analytics partners for Product and provide Behavioural Science insights across Jupiter.
About the role:
We’re looking for strong Software Engineers who can combine EMR, Redshift, Hadoop, Spark, Kafka, Elasticsearch, TensorFlow, PyTorch and other technologies to build the next-generation Data Platform, ML Platform and Experimentation Platform. If this sounds interesting, we’d love to hear from you!
This role involves designing and developing software products that impact many areas of our business. The individual in this role will help define requirements, create software designs, implement code to these specifications, provide thorough unit and integration testing, and support products while they are deployed and used by our stakeholders.
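As a minimal sketch of the kind of data-platform work described above, the snippet below uses PySpark to read event data and produce a simple aggregate. The input and output paths and the field names are placeholders, not references to any real dataset.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("event-aggregation").getOrCreate()

    # Hypothetical input: newline-delimited JSON events with user_id and event_type fields.
    events = spark.read.json("s3://example-bucket/events/2024-01-01/")  # placeholder path

    # Count events per user and event type.
    daily_counts = (
        events.groupBy("user_id", "event_type")
              .agg(F.count("*").alias("event_count"))
    )
    daily_counts.write.mode("overwrite").parquet("s3://example-bucket/aggregates/")  # placeholder path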
Key Responsibilities:
Participate in, own and influence the architecture and design of systems
Collaborate with other engineers, data scientists, product managers
Build intelligent systems that drive decisions
Build systems that enable us to perform experiments and iterate quickly
Build platforms that enable scientists to train, deploy and monitor models at scale
Build analytical systems that drive better decision making
Required Skills:
Programming experience with at least one modern language such as Java or Scala, including object-oriented design
Experience in contributing to the architecture and design (architecture, design patterns, reliability and scaling) of new and current systems
Bachelor’s degree in Computer Science or related field
Computer Science fundamentals in object-oriented design
Computer Science fundamentals in data structures
Computer Science fundamentals in algorithm design, problem solving, and complexity analysis
Experience in databases, analytics, big data systems or business intelligence products:
Data lake, data warehouse, ETL, ML platform
Big data tech such as Hadoop and Apache Spark
engineering
2. Preferably has done a project or internship related to the field
3. Knowledge of SQL is a plus
4. A deep desire to learn new things and be a part of a vibrant start-up.
5. You will have a lot of free hand, and this comes with immense responsibility - so it is expected that you will be willing to master new things that come along!
Job Description:
1. Design and build a pipeline to train models for NLP problems like classification and NER (a minimal classification sketch follows this list)
2. Develop APIs that showcase our models' capabilities and enable third-party integrations
3. Work across a microservices architecture that processes thousands of documents per day.
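As a rough sketch of such a training pipeline, the snippet below fits a simple TF-IDF plus logistic-regression text classifier with scikit-learn. The example texts and labels are invented for illustration; a real pipeline would train on the actual document corpus and could then be served behind an API.

    from sklearn.pipeline import Pipeline
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    # Toy training data; in practice this comes from a labelled document corpus.
    texts = ["invoice for consulting services", "employment agreement between parties",
             "purchase order for office supplies", "non-disclosure agreement draft"]
    labels = ["invoice", "contract", "invoice", "contract"]

    clf = Pipeline([
        ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
        ("model", LogisticRegression(max_iter=1000)),
    ])
    clf.fit(texts, labels)
    print(clf.predict(["draft agreement for new vendor"]))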
- 4+ years of experience.
- Solid understanding of Python and Java, and general software development skills (source code management, debugging, testing, deployment, etc.).
- Experience working with Solr and Elasticsearch.
- Experience with NLP technologies and the handling of unstructured text.
- Detailed understanding of text pre-processing and normalisation techniques such as tokenisation, lemmatisation, stemming and POS tagging (see the pre-processing sketch after this list).
- Prior experience implementing traditional ML solutions for classification, regression or clustering problems.
- Expertise in text analytics - Sentiment Analysis, Entity Extraction, Language modelling - and the associated sequence learning models (RNN, LSTM, GRU).
- Comfortable working with deep-learning libraries (e.g. PyTorch).
- Candidates can be relatively fresh, with 1 or 2 years of experience; IIIT, BITS Pilani and the top 5 local colleges are the preferred colleges and universities.
- A Master's in machine learning is preferred.
- Candidates can be sourced from Mu Sigma and Manthan.
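As a minimal sketch of the text pre-processing steps mentioned above (tokenisation, POS tagging, stemming, lemmatisation), the snippet below uses NLTK; the exact resource names to download can vary between NLTK versions.

    import nltk
    from nltk.stem import PorterStemmer, WordNetLemmatizer

    # One-time downloads of the required NLTK resources (names may vary by version).
    nltk.download("punkt")
    nltk.download("wordnet")
    nltk.download("averaged_perceptron_tagger")

    text = "The agreements were signed by both parties yesterday"
    tokens = nltk.word_tokenize(text)                            # tokenisation
    tagged = nltk.pos_tag(tokens)                                # POS tagging
    stems = [PorterStemmer().stem(t) for t in tokens]            # stemming
    lemmas = [WordNetLemmatizer().lemmatize(t) for t in tokens]  # lemmatisation
    print(tagged, stems, lemmas, sep="\n")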
Software Architect/CTO
at Blenheim Chalcot IT Services India Pvt Ltd
You will work on Data Warehouse and Analytics solutions that aggregate data across diverse sources and data types, from text, video and audio through to live streams and IoT, in an agile project delivery environment with a focus on DataOps and Data Observability. You will work with Azure SQL Databases, Synapse Analytics, Azure Data Factory, Azure Data Lake Gen2, Azure Databricks, Azure Machine Learning, Azure Service Bus, Azure Serverless (Logic Apps, Function Apps), Azure Data Catalog and Purview, among other tools, gaining opportunities to learn some of the most advanced and innovative techniques in the cloud data space.
You will be building Power BI based analytics solutions to provide actionable insights into customer
data, and to measure operational efficiencies and other key business performance metrics.
You will be involved in the development, build, deployment, and testing of customer solutions, with responsibility for the design, implementation and documentation of the technical aspects, including integration, to ensure the solution meets customer requirements. You will be working closely with fellow architects, engineers, analysts, team leads and project managers to plan, build and roll out data-driven solutions.
Expertise:
Proven expertise in developing data solutions with Azure SQL Server and Azure SQL Data Warehouse (now Synapse Analytics).
Demonstrated expertise in data modelling and data warehouse methodologies and best practices.
Ability to write efficient data pipelines for ETL using Azure Data Factory or equivalent tools.
Integration of data feeds utilising both structured (e.g. XML/JSON) and flat (e.g. CSV, TXT, XLSX) schemas across a wide range of electronic delivery mechanisms (API, SFTP, etc.); a small ingestion sketch follows this list.
Azure DevOps knowledge is essential for CI/CD of data ingestion pipelines and integrations.
Experience with object-oriented / object-function scripting languages such as Python, Java, JavaScript, C# and Scala is required.
Expertise in creating technical and architecture documentation (e.g. HLD/LLD) is a must.
Proven ability to rapidly analyse and design solution architecture in client proposals is an added advantage.
Expertise with big data tools (Hadoop, Spark, Kafka, NoSQL databases, stream-processing systems) is a plus.
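As a small, hedged illustration of the feed-integration work above, the sketch below ingests one flat (CSV) and one structured (JSON) feed with pandas and loads them into staging tables; in this role the equivalent would typically be built in Azure Data Factory or similar tools. The file names are placeholders, and SQLite stands in for the real Azure SQL target.

    import json
    import pandas as pd
    from sqlalchemy import create_engine

    # Hypothetical feeds: a flat CSV file and a structured JSON payload.
    customers = pd.read_csv("customer_feed.csv")        # placeholder file name
    with open("orders_feed.json") as f:
        orders = pd.json_normalize(json.load(f))        # flatten nested JSON records

    # Light normalisation before loading.
    customers.columns = [c.strip().lower() for c in customers.columns]

    # Load into staging tables; SQLite is a stand-in for the real Azure SQL target.
    engine = create_engine("sqlite:///staging.db")
    customers.to_sql("stg_customers", engine, if_exists="replace", index=False)
    orders.to_sql("stg_orders", engine, if_exists="replace", index=False)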
Essential Experience:
5 or more years of hands-on experience in a data architect role, covering development of ingestion, integration, data auditing, reporting, and testing with the Azure SQL tech stack.
Full data and analytics project lifecycle experience (including costing and cost management of data solutions) in an Azure PaaS environment is essential.
Microsoft Azure and Data certifications, at least at fundamentals level, are a must.
Experience using agile development methodologies, version control systems and repositories is a must.
A good, applied understanding of the end-to-end data process development life cycle.
A good working knowledge of data warehouse methodology using Azure SQL.
A good working knowledge of the Azure platform, its components, and the ability to leverage its resources to implement solutions is a must.
Experience working in the Public sector, or in an organisation servicing the Public sector, is a must.
Ability to work to demanding deadlines, keep momentum and deal with conflicting priorities in an environment undergoing a programme of transformational change.
The ability to contribute and adhere to standards, have excellent attention to detail and be strongly driven by quality.
Desirables:
Experience with AWS or google cloud platforms will be an added advantage.
Experience with Azure ML services will be an added advantage.
Personal Attributes:
Articulate and clear in communications to mixed audiences - in writing, through presentations and one-to-one.
Ability to present highly technical concepts and ideas in business-friendly language.
Ability to effectively prioritise and execute tasks in a high-pressure environment.
Calm and adaptable in the face of ambiguity and in a fast-paced, quick-changing environment.
Extensive experience working in a team-oriented, collaborative environment as well as working independently.
Comfortable with the multi-project, multi-tasking lifestyle of a consulting Data Architect.
Excellent interpersonal skills for working with teams and building trust with clients.
Ability to support and work with cross-functional teams in a dynamic environment.
A passion for achieving business transformation; the ability to energise and excite those you work with
Initiative; the ability to work flexibly in a team, working comfortably without direct supervision.
We are looking for passionate, talented and super-smart engineers to join our product development team. If you are someone who innovates, loves solving hard problems, and enjoys end-to-end product development, then this job is for you! You will be working with some of the best developers in the industry in a self-organising, agile environment where talent is valued over job title or years of experience.
Responsibilities:
- You will be involved in end-to-end development of VIMANA technology, adhering to our development practices and expected quality standards.
- You will be part of a highly collaborative Agile team which passionately follows SAFe Agile practices, including pair-programming, PR reviews, TDD, and Continuous Integration/Delivery (CI/CD).
- You will be working with cutting-edge technologies and tools for stream processing using Java, NodeJS and Python, with frameworks like Spring and RxJS.
- You will be leveraging big data technologies like Kafka, Elasticsearch and Spark, processing more than 10 billion events per day to build a maintainable system at scale (a minimal consumer sketch follows this list).
- You will be building Domain Driven APIs as part of a micro-service architecture.
- You will be part of a DevOps culture where you will get to work with production systems, including operations, deployment, and maintenance.
- You will have an opportunity to continuously grow and build your capabilities, learning new technologies, languages, and platforms.
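As a minimal sketch of the stream-processing work described above, the snippet below consumes JSON events from Kafka with the kafka-python client; the topic name, broker address and event fields are assumptions for illustration only.

    import json
    from kafka import KafkaConsumer  # kafka-python client

    consumer = KafkaConsumer(
        "machine-events",                  # hypothetical topic
        bootstrap_servers="localhost:9092",
        value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
        auto_offset_reset="earliest",
        group_id="event-enricher",
    )

    for message in consumer:
        event = message.value
        # Validate/enrich the event here, then forward it to a sink such as Elasticsearch.
        print(event.get("device_id"), event.get("status"))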
Requirements:
- Undergraduate degree in Computer Science or a related field, or equivalent practical experience.
- 2 to 5 years of product development experience.
- Experience building applications using Java, NodeJS, or Python.
- Deep knowledge in Object-Oriented Design Principles, Data Structures, Dependency Management, and Algorithms.
- Working knowledge of message queuing, stream processing, and highly scalable Big Data technologies.
- Experience in working with Agile software methodologies (XP, Scrum, Kanban), TDD and Continuous Integration (CI/CD).
- Experience using NoSQL databases like MongoDB or Elasticsearch.
- Prior experience with container orchestrators like Kubernetes is a plus.
We build products and platforms for the Industrial Internet of Things. Our technology is being used around the world in mission-critical applications - from improving the performance of manufacturing plants, to making electric vehicles safer and more efficient, to making industrial equipment smarter.
Please visit https://govimana.com/ to learn more about what we do.
Why Explore a Career at VIMANA
- We recognize that our dedicated team members make us successful and we offer competitive salaries.
- We are a workplace that values work-life balance, provides flexible working hours, and full time remote work options.
- You will be part of a team that is highly motivated to learn and work on cutting edge technologies, tools, and development practices.
- Bon Appetit! Enjoy catered breakfasts, lunches and free snacks!
VIMANA Interview Process
We usually aim to complete all the interviews within a week and provide prompt feedback to the candidate. As of now, all interviews are conducted online due to the COVID situation.
1. Telephonic screening (30 min)
A 30 minute telephonic interview to understand and evaluate the candidate's fit with the job role and the company.
Clarify any queries regarding the job/company.
Give an overview of the further interview rounds
2. Technical Rounds
This would be a deep technical round to evaluate the candidate's technical capability pertaining to the job role.
3. HR Round
The candidate's team and cultural fit will be evaluated during this round.
We would proceed with releasing the offer if the candidate clears all the above rounds.
Note: In certain cases, we might schedule additional rounds if needed before releasing the offer.
Objectives of this Role:
• Design and develop creative and innovative frameworks/components for data platforms, as we continue to experience dramatic growth in the usage and visibility of our products
• Work closely with data scientists and product owners to arrive at better design and development approaches so the application and platform can scale and serve evolving needs
• Examine existing systems, identifying flaws and creating solutions to improve service uptime and time-to-resolve through monitoring and automated remediation
• Plan and execute full software development life cycles (SDLC) for each assigned project, adhering to company standards and expectations
Daily and Monthly Responsibilities:
• Design and build tools/frameworks/scripts to automate development, testing, deployment, management and monitoring of the company’s 24x7 services and products
• Plan and scale distributed software and applications, applying synchronous and asynchronous design patterns (see the sketch after this list), write code, and deliver with urgency and quality
• Collaborate with the global team, producing project work plans and analyzing the efficiency and feasibility of project operations
• Manage large volumes of data and process them in real-time and batch modes as needed, while leveraging the global technology stack and making localized improvements
• Track, document, and maintain software system functionality - both internally and externally - leveraging opportunities to improve engineering productivity
• Perform code reviews, Git operations and CI/CD; mentor and assign tasks to junior team members
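As a small sketch of the asynchronous pattern mentioned above, the snippet below fans out several I/O-bound tasks with asyncio and awaits them together; the process function is a hypothetical stand-in for a real API or database call.

    import asyncio

    async def process(record_id: int) -> str:
        await asyncio.sleep(0.1)   # stand-in for I/O latency
        return f"processed {record_id}"

    async def main() -> None:
        # Fan out many I/O-bound tasks and gather the results concurrently.
        results = await asyncio.gather(*(process(i) for i in range(10)))
        print(results)

    asyncio.run(main())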
Responsibilities:
• Writing reusable, testable, and efficient code
• Design and implementation of low-latency, high-availability, and performant applications
• Integration of user-facing elements developed by front-end developers with server-side logic
• Implementation of security and data protection
• Integration of data storage solutions
Skills and Qualifications
• Bachelor’s degree in software engineering or information technology
• 5-7 years’ experience engineering software and networking platforms
• 5+ years of professional experience with Python, Java or Scala.
• Strong experience in API development and API integration.
• Proven knowledge of data migration, platform migration, CI/CD processes, and orchestration workflows like Airflow, Luigi or Azkaban (a minimal DAG sketch follows this list).
• Experience with data engineering tools and platforms such as Kafka, Spark, Databricks, Hadoop and NoSQL platforms
• Prior experience in data warehouse and OLAP design and deployment.
• Proven ability to document design processes, including development, tests, analytics, and troubleshooting
• Experience with rapid development cycles in a web-based / multi-cloud environment
• Strong scripting and test automation abilities
Good to have Qualifications
• Working knowledge of relational databases as well as ORM and SQL technologies
• Proficiency with multi-OS environments, Docker and Kubernetes
• Proven experience designing interactive applications and largescale platforms
• Desire to continue to grow professional capabilities with ongoing training and educational opportunities.
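As a minimal sketch of the orchestration workflows mentioned above, the snippet below defines a three-step Airflow DAG (assuming Airflow 2.x); the DAG id and the extract/transform/load callables are placeholders rather than a real pipeline.

    from datetime import datetime
    from airflow import DAG
    from airflow.operators.python import PythonOperator

    # Placeholder callables standing in for real extract/transform/load logic.
    def extract():   print("pull data from the source")
    def transform(): print("clean and reshape the data")
    def load():      print("write the data to the warehouse")

    with DAG(
        dag_id="example_daily_etl",        # hypothetical DAG name
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        t_extract = PythonOperator(task_id="extract", python_callable=extract)
        t_transform = PythonOperator(task_id="transform", python_callable=transform)
        t_load = PythonOperator(task_id="load", python_callable=load)

        t_extract >> t_transform >> t_load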
Data Engineer
at Pluto Seven Business Solutions Pvt Ltd