
1. Core Responsibilities
· Review, suggest, and implement enhancements and bug fixes to the ServiceNow platform.
· Work closely with other IT teams to help implement integrations from other platforms (e.g., monitoring tools such as Nagios, Prometheus, Sematext, and Dynatrace) into the ServiceNow ecosystem.
· Attend important business meetings to gather information around projects pertaining to ServiceNow.
· Help to maintain and improve the CMDB by collaborating with key stakeholders to ensure the correct data is being maintained.
· Help to manage the platform to ensure a reliable seamless user experience.
· Develop and maintain service catalogue items by collaborating with key stakeholders across the business.
· Support the bank's audit requirements around the ServiceNow platform by helping to provide reports and audits as required.
· Support audit requirements and compliance with standards.
· Create customized dashboards and reports.
· Automate processes using ServiceNow (e.g., major incident management, incident reduction, and problem management) where applicable.
· Drive service improvement plans to optimize the ServiceNow platform independently.
· Maintain the company’s compliance standards and ensure timely completion of all mandatory on-line training modules and attestations.
2. Experience Requirements
Essential:
· 4 to 6 years' previous experience in ServiceNow administration or technical work on ServiceNow design and implementation
· 4 to 6 years' previous experience in delivering ServiceNow projects (new modules, improvements, enhancements, etc.)
· 4 to 6 years' previous experience or an equivalent qualification in ServiceNow ITSM and ITOM
· 8 to 10 years' overall experience in IT
Desirable
· 3 to 5 years' experience in orchestration and service mapping
3. Knowledge Requirements
Essential
· Very good knowledge of Incident Management, Request Fulfilment, Change Management, and Problem Management processes
· Very good knowledge of ITSM and ITOM practices
· Detailed knowledge of ITIL/ITSM best practices
Desirable
· Good understanding of CSDM
· Good knowledge of the ISO 20K, 27K, and 9K standards
· Basic knowledge of IT infrastructure technologies used in the banking domain

About OSBIndia Private Limited
About TVARIT
TVARIT GmbH specializes in developing and delivering cutting-edge artificial intelligence (AI) solutions for the metal industry, including steel, aluminum, copper, and cast iron. Our software products empower customers to make intelligent, data-driven decisions, driving advancements in Predictive Quality (PsQ), Predictive Maintenance (PdM), and Energy Consumption Reduction (PsE). With a strong portfolio of renowned reference customers, state-of-the-art technology, a talented research team from prestigious universities, and recognition through esteemed awards such as the EU Horizon 2020 AI Prize, TVARIT is recognized as one of the most innovative AI companies in Germany and Europe. We are seeking a self-motivated individual with a positive "can-do" attitude and excellent oral and written communication skills in English to join our team.
Job Description: We are looking for a Senior Data Engineer with strong expertise in Azure Databricks, PySpark, and distributed computing to develop and optimize scalable ETL pipelines for manufacturing analytics. The role involves working with high-frequency industrial data to enable real-time and batch data processing.
Key Responsibilities
· Build scalable real-time and batch processing workflows using Azure Databricks, PySpark, and Apache Spark.
· Perform data pre-processing, including cleaning, transformation, deduplication, normalization, encoding, and scaling to ensure high-quality input for downstream analytics.
· Design and maintain cloud-based data architectures, including data lakes, lakehouses, and warehouses, following Medallion Architecture.
· Deploy and optimize data solutions on Azure (preferred), AWS, or GCP with a focus on performance, security, and scalability.
· Develop and optimize ETL/ELT pipelines for structured and unstructured data from IoT, MES, SCADA, LIMS, and ERP systems.
· Automate data workflows using CI/CD and DevOps best practices, ensuring security and compliance with industry standards.
· Monitor, troubleshoot, and enhance data pipelines for high availability and reliability.
· Utilize Docker and Kubernetes for scalable data processing.
· Collaborate with the automation team, data scientists, and engineers to provide clean, structured data for AI/ML models.
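To make the pre-processing responsibility above concrete, here is a minimal sketch in plain Python (not PySpark, and using hypothetical sensor records) of the cleaning, deduplication, and scaling steps a pipeline like this performs; at scale the same logic would be expressed with DataFrame operations:

```python
def preprocess(readings):
    """Clean, deduplicate, and min-max scale a list of (sensor_id, value) tuples.

    Illustrative only: field names and record shape are assumptions,
    not taken from any real MES/SCADA schema.
    """
    # Cleaning: drop records with missing values.
    cleaned = [(s, v) for s, v in readings if v is not None]
    # Deduplication: keep only the first occurrence of each record.
    seen, deduped = set(), []
    for rec in cleaned:
        if rec not in seen:
            seen.add(rec)
            deduped.append(rec)
    # Normalization: min-max scale values into [0, 1].
    values = [v for _, v in deduped]
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # avoid division by zero when all values match
    return [(s, (v - lo) / span) for s, v in deduped]
```

For example, `preprocess([("a", 10.0), ("a", 10.0), ("b", None), ("c", 20.0)])` drops the null, removes the duplicate, and scales the remaining values to `[("a", 0.0), ("c", 1.0)]`.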
Desired Skills and Qualifications
· Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
· 7+ years of experience in core data engineering, with a strong focus on cloud platforms such as Azure (preferred), AWS, or GCP.
· Proficiency in PySpark, Azure Databricks, Python, and Apache Spark.
· 2 years of team-handling experience.
· Expertise in relational databases (e.g., SQL Server, PostgreSQL), time series databases (e.g., InfluxDB), and NoSQL databases (e.g., MongoDB, Cassandra).
· Experience in containerization (Docker, Kubernetes).
· Strong analytical and problem-solving skills with attention to detail.
· Good to have: MLOps and DevOps experience, including model lifecycle management.
· Excellent communication and collaboration skills, with a proven ability to work effectively as a team player.
· Comfortable working in a dynamic, fast-paced startup environment, adapting quickly to changing priorities and responsibilities.
About the job
Company Overview: Abracon is a leading global supplier of Timing, Power, and RF Components, dedicated to transforming ideas into innovative products that meet the challenges of tomorrow.
Position Summary: We are seeking a skilled Product Engineer to join our Global Support Team, focusing on our Power and Magnetic product line. This role involves supporting components vital for power and filtering circuitry, including RF and power inductors, common mode chokes (CMC), transformers, Ethernet magnetics, ferrite beads, supercapacitors, and other power management components.
Work Environment: The successful candidate will be positioned as a full-time employee at Abracon’s Center of Excellence located in Bangalore, India. The candidate will have the option to engage in a hybrid work-from-home arrangement.
Come join our team! Excellence through teamwork drives the company culture at Abracon. We provide opportunities and support for your professional growth and empower each other in exercising our individual strengths.
With a broad portfolio of passive and electromechanical timing, synchronization, power, connectivity and RF antenna solutions and more, Abracon helps engineers transform their ideas into products that meet future customer needs. For more information about Abracon, visit www.abracon.com
Job responsibilities
- Performs development, deployment, administration, management, configuration, testing, and integration tasks related to the cloud security platforms.
- Develops automated security and compliance capabilities in support of DevOps processes in a large-scale computing environment for the storage program within the firm.
- Champions a DevOps security model so that security is automated and elastic across all platforms, and cultivates a cloud-first mindset in transitioning workloads.
- Leverages DevOps tools to build, harden, maintain, and instrument a comprehensive security orchestration platform for infrastructure as code.
- Provides support to drive the maturity of the Cybersecurity software development lifecycle and develop & improve the quality of technical engineering documentation.
- Makes decisions of a global, strategic nature by analyzing complex data systems and incorporating knowledge of other lines of business & JPMC standards.
- Provides quality control of engineering deliverables, technical consultation to product management and technical interface between development and operations teams.
Required qualifications, capabilities, and skills
- Formal training or certification on Security engineering and 3+ years applied experience
- Proficiency in programming languages like Python or Java with strong coding skills
- Understanding of one or more public cloud platforms (AWS, GCP, or Azure)
- Experience with highly scalable systems, release management, software configuration, design, development, and implementation is required
- Ability to analyze complex data systems (failure analysis / root cause analysis) and to develop, improve, and maintain technical engineering documentation
Responsibilities:
- Be the analytical expert in Kaleidofin, managing ambiguous problems by using data to execute sophisticated quantitative modeling and deliver actionable insights.
- Develop comprehensive skills including project management, business judgment, analytical problem solving and technical depth.
- Become an expert on data and trends, both internal and external to Kaleidofin.
- Communicate key state of the business metrics and develop dashboards to enable teams to understand business metrics independently.
- Collaborate with stakeholders across teams to drive data analysis for key business questions, communicate insights and drive the planning process with company executives.
- Automate scheduling and distribution of reports and support auditing and value realization.
- Partner with enterprise architects to define and ensure proposed Business Intelligence solutions adhere to an enterprise reference architecture.
- Design robust data-centric solutions and architecture that incorporates technology and strong BI solutions to scale up and eliminate repetitive tasks.
- Experience leading development efforts through all phases of SDLC.
- 2+ years "hands-on" experience designing Analytics and Business Intelligence solutions.
- Experience with Quicksight, PowerBI, Tableau and Qlik is a plus.
- Hands on experience in SQL, data management, and scripting (preferably Python).
- Strong data visualisation design skills, data modeling and inference skills.
- Hands-on and experience in managing small teams.
- Financial services experience preferred, but not mandatory.
- Strong knowledge of architectural principles, tools, frameworks, and best practices.
- Excellent communication and presentation skills to communicate and collaborate with all levels of the organisation.
- Preferred candidates with less than 30 days notice period.
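The "automate scheduling and distribution of reports" responsibility above can be sketched in a few lines of stdlib Python. This is a toy illustration with hypothetical metric and field names, not any actual Kaleidofin reporting stack; the roll-up it produces is the kind of summary a scheduled job would then write out and distribute:

```python
from collections import defaultdict

def summarize(transactions):
    """Roll up (region, amount) records into per-region counts and totals.

    Record shape and metric names are illustrative assumptions.
    """
    totals = defaultdict(lambda: {"count": 0, "amount": 0.0})
    for region, amount in transactions:
        totals[region]["count"] += 1
        totals[region]["amount"] += amount
    return dict(totals)
```

In practice the summary would feed a dashboard or be exported on a schedule (e.g., via cron or an orchestrator), so stakeholders can read the metrics without re-running the analysis.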
Responsibilities:
- Work with the client from the start of each project to ensure you understand the project scope and vision
- Oversee the beginning of each turn-key project, including details like permit submission and design evaluations
- Create the schedule for each project and match talent to the job
- Process change orders
- Collaborate with the architect and construction crew to ensure feasibility of each project
- Conduct meetings on-site with architect, client and construction crew.
- Prepare and submit project estimates to clients
JOB TITLE- JAVA FULL STACK DEVELOPER
REQUIRED SKILLS
- Backend: Core Java, Spring Boot, Microservices, JPA, Hibernate, Data Structures and Algorithms, and RESTful Web Services
- Front end: HTML, CSS, JavaScript, Vue.js
- Database : MySQL
- Version Control Tool: Git
- Cloud Service: AWS
JOB DESCRIPTION
Commercial experience in full-stack Java development
Should be strong on core Java basics: OOP concepts, Strings, Collections, Exceptions, Interfaces, and Inheritance
Spring family: Spring Boot and Spring Core
Experience in creating microservices using REST and Spring Boot
Should be familiar with the HTML, CSS, JavaScript, and Vue.js UI frameworks
Hands-on experience with Oracle or SQL Server databases
Knowledgeable in the AWS cloud
Agile experience: should understand Scrum ceremonies and be able to demonstrate experience in Agile
Responsibilities:
- Build React Native apps from scratch that'll be used globally.
- Architect App infrastructure systems like caching, state manager, common validation framework, etc.
- Launch the apps in different app stores like Apple, Google, etc.
- Create custom reusable Component Library
- Create data visualizations like charts, tables, dashboards, etc.
- Write well-tested, beautiful, performant, and bug-free code.
- Thrive in a collaborative team environment and work with challenging timelines.
- Establish and advocate front-end coding guidelines.
- Mentor team members and help resolve any critical blockers.
Requirements:
- A self-driven, hard-working Frontend engineer with an eye for detail.
- Bachelor's Degree or equivalent in Computer Science
- Minimum 4 years of overall experience with expertise in working in React Native, Redux and TypeScript
- Strong understanding and knowledge of Styling
- Experience in writing native code for Android and iOS
- Strong debugging skills for Android and iOS.
- Experience in launching apps in various App Stores.
- Experience with GraphQL is a plus.
- Familiarity with modern front-end build pipelines and tools
About Us
Spotnana is a stealth startup that is building the technology stack to solve the experience, trust, and transparency challenges that legacy players in a $1.8 trillion industry refuse to. We were told it can’t be done - and now we’re doing it.
Based in the San Francisco Bay Area, our founding team consists of veterans from Google, Amazon, Twitter, Airbnb, Expedia, ThoughtSpot, FreshDesk, Goldman Sachs, Oyo, SAP, and Cohesity
Join us to find out what Silicon Valley investors and early backers of Airbnb, Amazon, Facebook, LinkedIn, Brex, Palantir, Revolut, and Snowflake see in our mission.
Let's explore creating one of the best pieces of work ever made, together.
Job Description
Niki is an artificially intelligent ordering application (http://niki.ai/app). Our founding team is from IIT Kharagpur, and we are looking for a Natural Language Processing Engineer to join our engineering team.
The ideal candidate will have industry experience solving language-related problems using statistical methods on vast quantities of data available from Indian mobile consumers and elsewhere.
Major responsibilities would be:
1. Create language models from text data. These language models draw heavily on recent statistical, deep learning, and rule-based research on building taggers, parsers, knowledge-graph-based dictionaries, etc.
2. Develop highly scalable classifiers and tools leveraging machine learning, data regression, and rule-based models
3. Work closely with product teams to implement algorithms that power user and developer-facing products
We work mostly in Java and Python, and object-oriented concepts are a must to fit in with the team. Basic eligibility criteria are:
1. Graduate/Post-Graduate/M.S./
2. Industry experience of a minimum of 5 years.
3. Strong background in Natural Language Processing and Machine Learning
4. Have some experience leading a team, big or small.
5. Experience with Hadoop/HBase/Pig or MapReduce/Sawzall/Bigtable is a plus
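As a toy illustration of the "language models from text data" responsibility above, here is a unigram model with add-one (Laplace) smoothing in stdlib Python. It is far simpler than the deep learning and rule-based approaches the role involves, but shares the same statistical core (the example corpus is made up for illustration):

```python
from collections import Counter

class UnigramLM:
    """Unigram language model with add-one (Laplace) smoothing."""

    def __init__(self, corpus):
        # Count word occurrences across all lines of the corpus.
        self.counts = Counter(w for line in corpus for w in line.split())
        self.total = sum(self.counts.values())
        self.vocab = len(self.counts)

    def prob(self, word):
        # Add-one smoothing: unseen words still get non-zero probability.
        return (self.counts[word] + 1) / (self.total + self.vocab)
```

For example, on the corpus `["recharge my phone", "book a cab"]` (6 tokens, 6 distinct words), `prob("book")` is (1+1)/(6+6) = 1/6, while an unseen word gets 1/12.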
Competitive Compensation.
What We're Building
We are building an automated messaging platform to simplify the ordering experience for consumers. We have launched the Android app: http://niki.ai/app. In its current avatar, Niki can process mobile phone recharges and book cabs for consumers. It assists in finding the right recharge plans across top-up, 2G, and 3G, and completes the transaction. In cab booking, it helps with end-to-end booking along with tracking and cancellation within the app. You can also compare available cabs to find the nearest or the cheapest one.
Being an instant messaging app, it works seamlessly on 2G / 3G / WiFi and is lightweight at around 3.6 MB. You may check it out at: https://niki.ai/