
We are looking for a Director of Engineering to lead one of our key product engineering teams. This role will report directly to the VP of Engineering and will be responsible for successful execution of the company's business mission through development of cutting-edge software products and solutions.
- As the owner of the product, you will plan and execute the product roadmap and provide technical leadership to the engineering team.
- You will collaborate with the Product Management and Implementation teams to build a commercially successful product.
- You will be responsible for recruiting and leading a team of highly skilled software engineers and providing strong, hands-on engineering leadership.
- Deep technical knowledge of software product engineering is a must: Java 8, Java/J2EE, Node.js, React.js, full-stack development, NoSQL databases (MongoDB, Cassandra, Neo4j), Elasticsearch, Kibana, ELK, Kafka, Redis, Docker, Kubernetes, Amazon Web Services (AWS), architecture concepts, design patterns, data structures & algorithms, distributed computing, multi-threading, Apache Solr, ActiveMQ, RabbitMQ, Spark, Scala, Sqoop, HBase, Hive, WebSockets, web crawlers, Spring Boot, etc.
- 16+ years of experience in software engineering, with at least 5 years as an engineering leader in a software product company.
- Hands-on technical leadership with a proven ability to recruit high-performing talent.
- High technical credibility - ability to audit technical decisions and push for the best solution to a problem.
- Experience building end-to-end applications, from the backend database to the persistence layer.
- Experience with UI technologies such as Angular, React.js, and Node.js, or in a full-stack environment, is preferred.
- Experience with NoSQL technologies (MongoDB, Cassandra, Neo4j, DynamoDB, etc.)
- Elasticsearch, Kibana, Logstash (the ELK stack).
- Experience in developing Enterprise Software using Agile Methodology.
- Good understanding of Kafka, Redis, ActiveMQ, RabbitMQ, Solr, etc.
- SaaS cloud-based platform exposure.
- Experience with Docker, Kubernetes, etc.
- Ownership of end-to-end design and development, and exposure to delivering quality enterprise products/applications.
- A track record of setting and achieving high standards
- Strong understanding of modern technology architecture
- Key programming skills: Java, J2EE with cutting-edge technologies
- Excellent team building, mentoring and coaching skills are a must-have

Job Title: Software Development Engineer – III (SDE-III)
Location: Sector 55, Gurugram (Onsite)
Work Timings: Regular day shift, 5 days working
About Master-O
Master-O is a next-generation sales enablement and microskill learning platform designed to empower frontline sales teams through gamification, AI-driven coaching, and just-in-time learning. We work closely with large enterprises to improve sales readiness, productivity, and on-ground performance at scale.
As we continue to build intelligent, scalable, and enterprise-ready products, we are looking for a seasoned SDE-III who can take ownership of complex modules, mentor engineers, and contribute to architectural decisions.
Role Overview
As an SDE-III at Master-O, you will play a critical role in designing, building, and scaling core product features used by large enterprises with high user volumes. You will work closely with Product, Design, and Customer Success teams to deliver robust, high-performance solutions while ensuring best engineering practices.
This is a hands-on role requiring strong technical depth, system thinking, and the ability to work in a fast-paced B2B SaaS environment.
Required Skills & Experience
- 4–5 years of full-time professional experience in software development
- Strong hands-on experience with:
  - React.js
  - Node.js & Express.js
  - JavaScript
  - MySQL
  - AWS
- Prior experience working in B2B SaaS companies (preferred)
- Experience handling enterprise-level applications with high concurrent users
- Solid understanding of REST APIs, authentication, authorization, and backend architecture
- Strong problem-solving skills and ability to write clean, maintainable, and testable code
- Comfortable working in an onsite, collaborative team environment
Good to Have
- Experience working with or integrating LLMs, AI assistants, or Agentic AI systems
- Experience with cloud platforms and deployment workflows
- Prior experience in EdTech, Sales Enablement, or Enterprise Productivity tools
Why Join Master-O?
- Opportunity to build AI-first, enterprise-grade products from the ground up
- High ownership role with real impact on product direction and architecture
- Work on meaningful problems at the intersection of sales, learning, and AI
- Collaborative culture with fast decision-making and minimal bureaucracy
- Be part of a growing product company shaping the future of sales readiness
Job Title: AI Engineer
Location: Bengaluru
Experience: 3 Years
Working Days: 5 Days
About the Role
We’re reimagining how enterprises interact with documents and workflows—starting with BFSI and healthcare. Our AI-first platforms are transforming credit decisioning, document intelligence, and underwriting at scale. The focus is on Intelligent Document Processing (IDP), GenAI-powered analysis, and human-in-the-loop (HITL) automation to accelerate outcomes across lending, insurance, and compliance workflows.
As an AI Engineer, you’ll be part of a high-caliber engineering team building next-gen AI systems that:
- Power robust APIs and platforms used by underwriters, credit analysts, and financial institutions.
- Build and integrate GenAI agents.
- Enable “human-in-the-loop” workflows for high-assurance decisions in real-world conditions.
Key Responsibilities
- Build and optimize ML/DL models for document understanding, classification, and summarization.
- Apply LLMs and RAG techniques for validation, search, and question-answering tasks.
- Design and maintain data pipelines for structured and unstructured inputs (PDFs, OCR text, JSON, etc.).
- Package and deploy models as REST APIs or microservices in production environments.
- Collaborate with engineering teams to integrate models into existing products and workflows.
- Continuously monitor and retrain models to ensure reliability and performance.
- Stay updated on emerging AI frameworks, architectures, and open-source tools; propose improvements to internal systems.
Required Skills & Experience
- 2–5 years of hands-on experience in AI/ML model development, fine-tuning, and building ML solutions.
- Strong Python proficiency with libraries such as NumPy, Pandas, scikit-learn, PyTorch, or TensorFlow.
- Solid understanding of transformers, embeddings, and NLP pipelines.
- Experience working with LLMs (OpenAI, Claude, Gemini, etc.) and frameworks like LangChain.
- Exposure to OCR, document parsing, and unstructured text analytics.
- Familiarity with model serving, APIs, and microservice architectures (FastAPI, Flask).
- Working knowledge of Docker, cloud environments (AWS/GCP/Azure), and CI/CD pipelines.
- Strong grasp of data preprocessing, evaluation metrics, and model validation workflows.
- Excellent problem-solving ability, structured thinking, and clean, production-ready coding practices.
CTC: up to 20 LPA
Required Skills:
- Strong experience in SAP EWM Technical Development.
- Proficiency in ABAP (Reports, Interfaces, Enhancements, Forms, BAPIs, BADIs).
- Hands-on experience with RF developments, PPF framework, and queue monitoring.
- Understanding of EWM master data, inbound/outbound processes, and warehouse tasks.
- Experience with SAP integration technologies (IDoc, ALE, Web Services).
- Good analytical, problem-solving, and communication skills.
Nice to Have:
- Exposure to S/4HANA EWM.
- Knowledge of Functional EWM processes.
- Experience in Agile / DevOps environments.
If interested, kindly share your updated resume on 82008 31681.
Responsibilities
- Responsible for the implementation and ongoing administration of Hadoop infrastructure.
- Aligning with the systems engineering team to propose and deploy new hardware and software environments required for Hadoop and to expand existing environments.
- Working with data delivery teams to set up new Hadoop users. This includes setting up Linux users, setting up Kerberos principals, and testing HDFS, Hive, Pig, and MapReduce access for the new users.
- Cluster maintenance, as well as creation and removal of nodes, using tools like Ganglia, Nagios, Cloudera Manager Enterprise, Dell OpenManage, and other tools.
- Performance tuning of Hadoop clusters and Hadoop MapReduce routines.
- Screening Hadoop cluster job performance and capacity planning.
- Monitoring Hadoop cluster connectivity and security.
- Managing and reviewing Hadoop log files.
- File system management and monitoring.
- Diligently teaming with the infrastructure, network, database, application, and business intelligence teams to guarantee high data quality and availability.
- Collaborating with application teams to install operating system and Hadoop updates, patches, and version upgrades when required.
Qualifications
- Bachelor's degree in Information Technology, Computer Science, or other relevant fields.
- General operational expertise, such as good troubleshooting skills and an understanding of system capacity, bottlenecks, and the basics of memory, CPU, OS, storage, and networks.
- Hadoop skills such as HBase, Hive, Pig, and Mahout.
- Ability to deploy a Hadoop cluster, add and remove nodes, keep track of jobs, monitor critical parts of the cluster, configure NameNode high availability, and schedule, configure, and take backups.
- Good knowledge of Linux, as Hadoop runs on Linux.
- Familiarity with open-source configuration management and deployment tools such as Puppet or Chef, and with Linux scripting.
Nice to Have
- Knowledge of Troubleshooting Core Java Applications is a plus.
STATE IN CHARGE
Education
Any Bachelor's Degree
Job Description
- Establishing good relationships with the Bank's LHO, RBO, DSH & Link Branches.
- Appointment of Kiosk operators in allocated URBAN locations.
- Responsible for ensuring completion & submission of documents at the Bank's office for opening CSPs.
- Ensuring proper selection of CSP locations & Kiosk Operators with a view to business sustainability.
- Working towards increasing the business performance of CSPs.
- Controlling & monitoring CSPs to prevent fraud.
- Managing all state-level activities and coordination.
- Representation at regional-level offices as well as district-level offices.
- Any other work, as and when required, pertaining to the Financial Inclusion business.
Candidate Requirements:
1. Graduate
2. Minimum 5 years of experience
3. Experience in the financial inclusion (BFSI) sector
4. Should have handled a team
5. Computer skills: email, MS Excel, MS Word
6. Good communication skills (English and the local language), written & oral
Provide guidance, mentorship and effective knowledge management within the team to ensure profitability
- Identify, assess and manage risks arising out of operational plan, design or delivery
- Manage multiple stakeholders and work closely with them.
- Ensure a balance between sometimes-competing needs and priorities
- Develop guidelines for effective reflection, learning, and change
- Work on the results from learning and reflection exercises to create best practices and introduce process improvements
Employment terms: Full time
Preferred Date of joining: Latest possible
Travel expenses & Mobile reimbursement: As per the team budget and requirement of the role.
Hands-on experience in system design and architecture.
Exposure to Microservices.
Experience in B2C.
Skills: Java / Python / Golang / C++
Must have experience with the Knowledge Base and Document & Media OOTB portlets.
Must have experience with SQL Server (preferred).
Hands-on experience with JSR 286, OSGi, Java, J2EE, Portlets, Hooks, AJAX, JavaScript, jQuery, FreeMarker, HTML5, CSS3.
Expert in any relational database with SQL; unit testing, state management, Git, SVN.
Themes, Layouts, AUI, Liferay patching, Gradle/Maven, Band, Kaleo workflows, SOAP, and RESTful web services.









