
Lead I - Software Engineering - AI Solutions Analyst
at Global Digital Transformation Solutions Provider
MUST-HAVES:
- LLM Integration & Prompt Engineering
- Context & Knowledge Base Design
- Experience running LLM evals
NOTICE PERIOD: Immediate – 30 Days
SKILLS: LLM, AI, PROMPT ENGINEERING
NICE TO HAVES:
- Data Literacy & Modelling Awareness
- Familiarity with Databricks, AWS, and ChatGPT Environments
ROLE PROFICIENCY:
Role Scope / Deliverables:
- Serve as the link between business intelligence, data engineering, and AI application teams, ensuring the Large Language Model (LLM) interacts effectively with the modeled dataset.
- Define and curate the context and knowledge base that enables GPT to provide accurate, relevant, and compliant business insights.
- Collaborate with Data Analysts and System SMEs to identify, structure, and tag data elements that feed the LLM environment.
- Design, test, and refine prompt strategies and context frameworks that align GPT outputs with business objectives.
- Conduct evaluation and performance testing (evals) to validate LLM responses for accuracy, completeness, and relevance.
- Partner with IT and governance stakeholders to ensure secure, ethical, and controlled AI behavior within enterprise boundaries.
KEY DELIVERABLES:
- LLM Interaction Design Framework: Documentation of how GPT connects to the modeled dataset, including context injection, prompt templates, and retrieval logic (see the illustrative sketch after this list).
- Knowledge Base Configuration: Curated and structured domain knowledge to enable precise and useful GPT responses (e.g., commercial definitions, data context, business rules).
- Evaluation Scripts & Test Results: Defined eval sets, scoring criteria, and output analysis to measure GPT accuracy and quality over time.
- Prompt Library & Usage Guidelines: Standardized prompts and design patterns to ensure consistent business interactions and outcomes.
- AI Performance Dashboard / Reporting: Visualizations or reports summarizing GPT response quality, usage trends, and continuous improvement metrics.
- Governance & Compliance Documentation: Inputs to data security, bias prevention, and responsible AI practices in collaboration with IT and compliance teams.
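For illustration only (not a deliverable of this posting), the sketch below shows, under stated assumptions, what the context-injection, prompt-template, and retrieval logic described in the LLM Interaction Design Framework deliverable might look like in Python. The knowledge-base entries, the naive keyword retriever, and names such as KNOWLEDGE_BASE, retrieve, and build_prompt are hypothetical placeholders; a production framework would retrieve context from the modeled dataset (e.g., via embeddings or SQL).

```python
# Minimal, hypothetical sketch of context injection, a prompt template, and
# retrieval logic. All names and sample entries are illustrative placeholders.
from dataclasses import dataclass


@dataclass
class KnowledgeItem:
    topic: str    # e.g., a commercial definition or business rule
    content: str  # curated, pre-approved text injected as context


# Curated knowledge base; in practice this is built from the modeled dataset.
KNOWLEDGE_BASE = [
    KnowledgeItem("net_revenue", "Net revenue is gross sales minus returns and discounts."),
    KnowledgeItem("active_customer", "An active customer has purchased within the last 90 days."),
]

PROMPT_TEMPLATE = """You are a business-insights assistant.
Answer only from the context below; if the answer is not there, say "not in context".

Context:
{context}

Question: {question}
"""


def retrieve(question: str, k: int = 2) -> list:
    """Naive keyword retrieval; a real system would use embeddings or SQL queries."""
    words = question.lower().replace("?", "").split()
    scored = [(sum(w in item.content.lower() for w in words), item) for item in KNOWLEDGE_BASE]
    return [item for score, item in sorted(scored, key=lambda s: -s[0]) if score > 0][:k]


def build_prompt(question: str) -> str:
    """Inject retrieved knowledge into the prompt template (context injection)."""
    context = "\n".join(f"- {item.topic}: {item.content}" for item in retrieve(question))
    return PROMPT_TEMPLATE.format(context=context or "(no relevant context found)", question=question)


if __name__ == "__main__":
    # The resulting prompt would be sent to the LLM by the application layer.
    print(build_prompt("How is net revenue defined?"))
```

In practice, the Prompt Library & Usage Guidelines deliverable would standardize templates like PROMPT_TEMPLATE across business use cases.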
KEY SKILLS:
Technical & Analytical Skills:
- LLM Integration & Prompt Engineering – Understanding of how GPT models interact with structured and unstructured data to generate business-relevant insights.
- Context & Knowledge Base Design – Skilled in curating, structuring, and managing contextual data to optimize GPT accuracy and reliability.
- Evaluation & Testing Methods – Experience running LLM evals, defining scoring criteria, and assessing model quality across use cases (a minimal illustrative harness follows this list).
- Data Literacy & Modeling Awareness – Familiar with relational and analytical data models to ensure alignment between data structures and AI responses.
- Familiarity with Databricks, AWS, and ChatGPT Environments – Capable of working in cloud-based analytics and AI environments for development, testing, and deployment.
- Scripting & Query Skills (e.g., SQL, Python) – Ability to extract, transform, and validate data for model training and evaluation workflows.
Business & Collaboration Skills:
- Cross-Functional Collaboration – Works effectively with business, data, and IT teams to align GPT capabilities with business objectives.
- Analytical Thinking & Problem Solving – Evaluates LLM outputs critically, identifies improvement opportunities, and translates findings into actionable refinements.
- Commercial Context Awareness – Understands how sales and marketing intelligence data should be represented and leveraged by GPT.
- Governance & Responsible AI Mindset – Applies enterprise AI standards for data security, privacy, and ethical use.
- Communication & Documentation – Clearly articulates AI logic, context structures, and testing results for both technical and non-technical audiences.
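Again for illustration only, here is a minimal sketch of an eval harness with a defined eval set and a simple scoring criterion, of the kind referenced under Evaluation & Testing Methods and the Evaluation Scripts & Test Results deliverable. The eval cases, the keyword-overlap scoring rule, and the get_model_answer callable are hypothetical assumptions; real evals would use curated business questions and richer scoring.

```python
# Minimal, hypothetical LLM eval harness: a small eval set, a scoring
# criterion, and an aggregate score. All cases and names are placeholders.

EVAL_SET = [
    {"question": "How is net revenue defined?",
     "expected_keywords": ["gross sales", "returns", "discounts"]},
    {"question": "What counts as an active customer?",
     "expected_keywords": ["90 days"]},
]


def score_answer(answer: str, expected_keywords: list) -> float:
    """Scoring criterion: fraction of expected keywords found in the answer."""
    answer = answer.lower()
    return sum(kw.lower() in answer for kw in expected_keywords) / len(expected_keywords)


def run_evals(get_model_answer) -> float:
    """get_model_answer(question) -> str wraps the actual LLM call being evaluated."""
    scores = [score_answer(get_model_answer(c["question"]), c["expected_keywords"])
              for c in EVAL_SET]
    return sum(scores) / len(scores)


if __name__ == "__main__":
    # Stubbed model for demonstration; replace with a real LLM-backed function.
    demo = lambda q: "Net revenue is gross sales minus returns and discounts."
    print(f"Mean eval score: {run_evals(demo):.2f}")
```

Scores like this, tracked over time, would feed the AI Performance Dashboard / Reporting deliverable.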

Similar jobs
Role: Java Backend developer
Location: Bangalore
- 4+ years of industrial experience
- Experience in Core Java 1.8 and above, Data Structures, OOPS, Multithreading, Microservices, Spring, Kafka
- Should have the ability to analyse, design, develop and test complex, low-latency client-facing applications.
- Good development experience with RDBMS
- Good knowledge of multi-threading and high-volume server-side development
- Excellent problem solving and coding skills in Java
- Strong interpersonal, communication and analytical skills.
Key Responsibilities
- Develop, test, and maintain applications using .Net/C# and Azure technologies.
- Collaborate with cross-functional teams to define, design, and ship new features.
- Ensure the performance, quality, and responsiveness of applications.
- Identify and correct bottlenecks and fix bugs.
- Help maintain code quality, organization, and automation.
Job Description:
6+ years of experience building global enterprise-level systems
Extensive experience developing complex distributed event-based microservices using C#/.Net Core
Knowledge of containerisation (Docker, Kubernetes)
Experience with cloud platforms (Azure, AWS)
Exposure to distributed messaging / streaming platforms (Apache Kafka)
Experience building CI/CD pipelines (ideally Azure DevOps)
Excellent knowledge of relational (SQL) and NoSQL databases
Integration Developer
ROLE TITLE
Integration Developer
ROLE LOCATION(S)
Bangalore/Hyderabad/Chennai/Coimbatore/Noida/Kolkata/Pune/Indore
ROLE SUMMARY
The Integration Developer is a key member of the operations team, responsible for ensuring the smooth integration and functioning of various systems and software within the organization. This role involves technical support, system troubleshooting, performance monitoring, and assisting with the implementation of integration solutions.
ROLE RESPONSIBILITIES
· Design, develop, and maintain integration solutions using Spring Framework, Apache Camel, and other integration patterns such as RESTful APIs, SOAP services, file-based FTP/SFTP, and OAuth authentication.
· Collaborate with architects and cross-functional teams to design integration solutions that are scalable, secure, and aligned with business requirements.
· Resolve complex integration issues, performance bottlenecks, and data discrepancies across multiple systems. Support Production issues and fixes.
· Document integration processes, technical designs, APIs, and workflows to ensure clarity and ease of use.
· Participate in on-call rotation to provide 24/7 support for critical production issues.
· Apply source code / version control management practices in a collaborative work environment.
TECHNICAL QUALIFICATIONS
· 5+ years of experience in Java development with strong expertise in Spring Framework and Apache Camel for building enterprise-grade integrations.
· Proficient with Azure DevOps (ADO) for version control, CI/CD pipeline implementation, and project management.
· Hands-on experience with RESTful APIs, SOAP services, and file-based integrations using FTP and SFTP protocols.
· Strong analytical and troubleshooting skills for resolving complex integration and system issues.
· Experience with Azure services, including Azure Service Bus, Azure Kubernetes Service (AKS), and Azure Container Apps; Azure API Management (APIM) is a plus.
· Good understanding of containerization and cloud-native development, with experience in using Docker, Kubernetes, and Azure AKS.
· Experience with OAuth for secure authentication and authorization in integration solutions.
· Strong experience using GitHub for source control.
· Strong background in SQL databases (e.g., T-SQL, Stored Procedures) and working with data in an integration context.
GENERAL QUALIFICATIONS
· Excellent analytical and problem-solving skills, with a keen attention to detail.
· Effective communication skills, with the ability to collaborate with technical and non-technical stakeholders.
· Experience working in a fast-paced, production support environment with a focus on incident management and resolution.
· Experience in the insurance domain is considered a plus.
EDUCATION REQUIREMENTS
· Bachelor’s degree in Computer Science, Information Technology, or related field.
Data Scientist
Job Title: Data Scientist – Data and Artificial Intelligence
Location: Hyderabad
Job Type: Full-time
Company Description:
Qylis is a leading provider of innovative IT solutions, specializing in Cloud, Data & AI, and Cyber Security. We help businesses unlock the full potential of these technologies to achieve their goals and gain a competitive edge. Our unique approach focuses on delivering value through bespoke solutions tailored to customer-specific needs. We are driven by a customer-centric mindset and committed to delivering continuous value through intellectual property accelerators and automation. Our team of experts is passionate about technology and dedicated to making a positive impact. We foster an environment of growth and innovation, constantly pushing the boundaries to deliver exceptional results.
Website: www.qylis.com | LinkedIn: www.linkedin.com/company/qylis
Job Summary
We are an engineering organization collaborating directly with clients to address challenges using the latest technologies. Our focus is on joint code development with clients' engineers for cloud-based solutions, accelerating organizational progress. Working with product teams, partners, and open-source communities, we contribute to open source, striving for platform improvement. This role involves creating impactful solution patterns and open-source assets. As a team member, you'll collaborate with engineers from both teams, applying your skills and creativity to solve complex challenges and contribute to open source, while fostering professional growth.
Responsibilities
• Research and develop production-grade models (forecasting, anomaly detection, optimization, clustering, etc.) for the global cloud business using statistical and machine learning techniques.
• Manage large volumes of data and create new and improved solutions for data collection, management, analysis, and data science model development.
• Drive the onboarding of new data and the refinement of existing data sources through feature engineering and feature selection.
• Apply statistical concepts and cutting-edge machine learning techniques to analyze cloud demand and optimize data science model code for distributed computing platforms and task automation.
• Work closely with other data scientists and data engineers to deploy models that drive cloud infrastructure capacity planning.
• Present analytical findings and business insights to project managers, stakeholders, and senior leadership; keep abreast of new statistical / machine learning techniques and implement them as appropriate to improve predictive performance.
• Oversee the analysis of data and lead the team in identifying trends, patterns, correlations, and insights to develop new forecasting models and improve existing models.
• Lead collaboration across the team and leverage data to identify pockets of opportunity for applying state-of-the-art algorithms to improve solutions to business problems.
• Consistently leverage knowledge of techniques to optimize analysis using algorithms.
• Modify statistical analysis tools for evaluating machine learning models. Solve deep and challenging problems in circumstances such as when model predictions are incorrect, when models do not match the training data or the design outcomes, when the data is not clean, when it is unclear which analyses to run, or when the process is ambiguous.
• Provide coaching to team members on business context, interpretation, and the implications of findings. Interpret findings and their implications for multiple businesses, and champion methodological rigour by calling attention to the limitations of knowledge wherever biases in data, methods, and analysis exist.
• Generate and leverage insights that inform future studies and reframe the research agenda. Inform current business decisions by implementing and adapting supply-chain strategies through complex business intelligence.
Qualifications
• M.Sc. in Statistics, Applied Mathematics, Applied Economics, Computer Science or Engineering, Data Science, Operations Research, or a similar applied quantitative field.
• 7+ years of industry experience in developing production-grade statistical and machine learning code in a collaborative team environment.
• Prior experience in machine learning using R or Python (scikit-learn / numpy / pandas / statsmodels).
• Prior experience working on computer vision projects is a plus.
• Knowledge of AWS and Azure cloud platforms.
• Prior experience in time series forecasting.
• Prior experience with typical data management systems and tools such as SQL.
• Knowledge of and ability to work within a large-scale computing or big data context, with hands-on experience in Hadoop, Spark, Databricks, or similar.
• Excellent analytical skills; ability to understand business needs and translate them into technical solutions, including analysis specifications and models.
• Experience in machine learning using R or Python (scikit-learn / numpy / pandas / statsmodels) with a skill level at or near fluency.
• Experience with deep learning frameworks (e.g., TensorFlow, PyTorch, CNTK) and solid knowledge of theory and practice.
• Practical and professional experience contributing to and maintaining a large code base with code versioning systems such as Git.
• Creative thinking skills with an emphasis on developing innovative methods to solve hard problems under ambiguity.
• Good interpersonal and communication (verbal and written) skills, including the ability to write concise and accurate technical documentation and communicate technical ideas to non-technical audiences.
Numadic is hiring a Fullstack Developer
We are Numads
Drawn to the unknown, we are new age nomads who seek to bring near what is far. We work as full stack humans, able to operate independently while enjoying the journey together. We see past the sandlines of clan and craft and value the unique and special talents of each. We think, we design, we code, we write, we share, we care and we ride together. We aim to live by our values of Humility, Collaboration and Transformation.
We undisrupt vehicle payments
To impact a highly fragmented v-commerce space, we aim to bring order by simplifying and aggregating. We are a full-stack v-commerce platform. We build the Network side of the product to achieve dense on-ground digital coverage by working with and aggregating different types of partners, and we further help set the standards for scaling sustainably into the future. We also build the User side of the product to make the road-travel experience for our vehicle owners and drivers contactless and fully autonomous.
About the role:
- Meet both technical and consumer needs.
- Design client-side and server-side architecture that can scale to thousands of end users.
- Ensure cross-platform optimisation and responsiveness of applications.
- Write and review technical documentation.
- Diagnose and fix bugs and performance bottlenecks.
- Maintain code and write automated tests to ensure the product is of the highest quality.
- Create security and data protection processes.
- Design and develop efficient APIs.
- Ability to quickly adapt and migrate code to most current technologies.
Why is the opportunity exciting?
We are a startup and offer the opportunity to be part of a fast-growing company. With full ownership, you will have the direct ability to make a difference and lead teams. You will work with and learn from a diverse group of Numads. You will solve first-to-market problems that will be taken global. We are based out of Goa and offer a great opportunity to work from one of the most beautiful parts of India.
Role requirements:
- 2+ years of proven experience working as a Fullstack Developer
- Proficiency with multiple front-end languages and frameworks (e.g. HTML, CSS, JavaScript, React, React Native)
- Knowledge of back-end languages (Node JS)
- Knowledge of AWS, Firebase, GIT and CI/CD tools.
- Hands-on experience with databases (e.g. Postgres, MongoDB).
- Bias for action - ability to move quickly while taking time out to review the details.
- Clear communicator - Ability to synthesise and clearly articulate complex information, highlighting key takeaways and actionable insights.
- Team player - Working mostly autonomously, yet being a team player keeping your crews looped-in.
- Education - Degree in Computer Science or relevant field or relevant experience.
- Mindset - Ability to take responsibility for your life and that of your people and projects.
- Mindfulness - Ability to maintain practices that keep you grounded.
Join Numadic
From the founders to our investors and advisors, what we share is a common respect for the value of human life and of meaningful relationships. We are full-stack humans, who work with full-stack humans and seek to do business with full-stack humans. We have turned down projects, when we found misalignment of values at the other end of the table. We do not believe that the customer is always right. We believe that all humans are equal and that the direction of the flow of money should not define the way people are treated. This is life at Numadic.
JD Code: SHI-LDE-01
Version#: 1.0
Date of JD Creation: 27-March-2023
Position Title: Lead Data Engineer
Reporting to: Technical Director
Location: Bangalore Urban, India (on-site)
SmartHub.ai (www.smarthub.ai) is a fast-growing Startup headquartered in Palo Alto, CA, and with offices in Seattle and Bangalore. We operate at the intersection of AI, IoT & Edge Computing. With strategic investments from leaders in infrastructure & data management, SmartHub.ai is redefining the Edge IoT space. Our “Software Defined Edge” products help enterprises rapidly accelerate their Edge Infrastructure Management & Intelligence. We empower enterprises to leverage their Edge environment to increase revenue, efficiency of operations, manage safety and digital risks by using Edge and AI technologies.
SmartHub is an equal opportunity employer and will always be committed to nurturing a workplace culture that supports, inspires and respects all individuals, and encourages employees to bring their best selves to work, laugh and share. We seek builders who hail from a variety of backgrounds, perspectives and skills to join our team.
Summary
This role requires the candidate to translate business and product requirements into building, maintaining, and optimizing data systems, which may be relational or non-relational in nature. The candidate is expected to tune and analyse the data for short- and long-term trend analysis, reporting, and AI/ML use cases.
We are looking for a talented technical professional with at least 8 years of proven experience in owning, architecting, designing, operating and optimising databases that are used for large scale analytics and reports.
Responsibilities
- Provide technical & architectural leadership for the next generation of product development.
- Innovate, Research & Evaluate new technologies and tools for a quality output.
- Architect, Design and Implement ensuring scalability, performance and security.
- Code and implement new algorithms to solve complex problems.
- Analyze complex data, develop, optimize and transform large data sets both structured and unstructured.
- Deploy and administer databases and continuously tune them for performance, especially on container orchestration stacks such as Kubernetes.
- Develop analytical models and solutions.
- Mentor junior members technically in architecture, design, and robust coding.
- Work in an Agile development environment while continuously evaluating and improving engineering processes.
Required
- At least 8 years of experience with significant depth in designing and building scalable distributed database systems for enterprise-class products, and experience working in product development companies.
- Should have been feature/component lead for several complex features involving large datasets.
- Strong background in relational and non-relational databases such as Postgres, MongoDB, and Hadoop.
- Deep expertise in database optimization and tuning; SQL, time-series databases, Apache Drill, HDFS, and Spark are good to have.
- Excellent analytical and problem-solving skill sets.
- Experience designing for high throughput is highly desirable.
- Exposure to database provisioning in Kubernetes/non-Kubernetes environments, configuration and tuning in a highly available mode.
- Demonstrated ability to provide technical leadership and mentoring to the team
• Needs to have clear, concise communication skills, both written and verbally, must speak slowly and clearly
• Needs to be able to answer incoming calls from customers and create tickets, assist with requests, be able to call third party vendors to open tickets and manage from creation to completion
• Needs to be able to follow multiple different processes depending on the circumstances
• Avaya Communication Manager including G650 – Triage alarms, Troubleshooting breakfix issues and simple/complex MAC
• Avaya Media Gateways: G430, G450, G700, G350 – Triage alarms, troubleshooting breakfix issues
• System Manager/Session Manager (SIP) – Triage alarms, Troubleshooting breakfix issues including SIP and simple/complex MAC
• AES – Triage alarms, Troubleshooting breakfix issues
• Avaya ACSS certification
• Avaya SBC – Triage alarms, Troubleshooting breakfix issues
• CMS – Triage alarms, Troubleshooting breakfix issues and simple/complex MAC
• Avaya One X Agent/Communicator – Troubleshooting breakfix issues
• AVST - Triage alarms, Troubleshooting breakfix issues and simple/complex MAC
• IX messaging - Triage alarms, Troubleshooting breakfix issues and simple/complex MAC
• AAM - Triage alarms, Troubleshooting breakfix issues and simple/complex MAC
• Intuity Audix – Triage alarms, Troubleshooting breakfix issues and simple/complex MAC
• MM – Triage alarms, Troubleshooting breakfix issues and simple/complex MAC
• CMM- Triage alarms, Troubleshooting breakfix issues and simple/complex MAC
• Utility Servers- Triage alarms, troubleshooting breakfix issues and simple/complex MAC
• Avaya Aura Media Server- Triage alarms, troubleshooting breakfix issues
Java Architect/Technical Lead-
Responsibilities-
- Deliver technical strategies and solutions, application development, and end-users services
- Lead projects to implement new or enhance existing functionalities including articulating requirements and translating them into effective technical solutions
- Build and maintain strong relationships and partnerships with team members across globe and customer stakeholders
- Work collaboratively as a team player to develop mutually acceptable solutions, and act as a mentor to the team
- Solve problems and provide support, taking responsibility to make decisions when appropriate
- Effectively communicate with team members and customers, Be able to articulate technology to a non-technical audience
- Build relationships with technical team members in all offices
- Track and manage dependencies across lines of business and platforms, Maintain technical risks register with mitigations
- Create or maintain architectural artifacts like architectural overview, architectural decisions, integrations diagram, non-functional requirements
- Collaborate with other technical team members and create a forum for sharing knowledge
- Come up with innovative ideas to improve quality in delivery
- Deploy, administer and support Microservices components in a RedHat OpenShift environment
- Should be hands-on and be ready to write complex code while mentoring junior developers to learn
- Thrive in a fast-paced and dynamic work environment and lead by example
Skills & Experience-
- 15-20 years of experience working on projects utilizing Java, Web Development/REST (preferably with Spring), Linux, JUnit, Mocking, Maven, JavaScript, and GIT
- Experience in the following areas is preferred – Microservices, Spring Boot, Spring Security with OAuth2, SOAP and SOAP UI, Jenkins configuration, AngularJS, RedHat PaaS (RHEL, Kubernetes, OpenShift, Camel, Fuse)
- Well versed in front end and back end development
- Good understanding of encryption technology
- Knowledge of infrastructure especially cloud infrastructure is desirable
- Should be conversant with DevOps methodologies
- At least 3 years of experience as a technical lead – coaching, presenting architectural/design elements to customers and team members
- Should possess strong problem solving and analytical skills
- Should have experience reviewing code for team members and laying out a concrete code quality plan
- Should have at least two years of experience working on projects using Test Driven Development (TDD)
- Should have 2+ years of experience using JUnit on projects
- Consistent incorporation of best practices and standards while coding; in particular: secure coding practices
- Good Exposure to building secure and scalable applications
- At least 4 years of experience handling customers directly; should have presented architecture artifacts and technical solutions to customers and influenced customer decisions
- Excellent communication (verbal and written) and interpersonal skills – should be able to influence customer stakeholders and senior management
- Team player with the ability to work proactively, provide mentorship and direction to the team, and make technical decisions independently
Qualification-
- BE/MCA or equivalent
- 15-20 years of software industry experience
- 6+ years of Java coding experience; immediate joiners preferred
- Will be working closely with Product and Technology team
- Very good logical thinking and quick at learning and exploring new frameworks
- Hands-on with technology and providing POC
- Strong track record of delivering projects first-time-right, with near-zero defects in production
Skill Sets:
- Java Spring Boot
- Micro Services architecture
- J2EE, JDBC, ORM Frameworks, JPA, NoSQL
This is a company that is scaling up. Your recruitment will be based on only one factor: how easily you're able to convince me over the phone.
You're one of the initial employees of the company, so your responsibilities will revolve around end-to-end sales and marketing of resumes, along with maintaining client relations.
Once onboarded, your pay scale will depend on the kind of traffic and value you bring to the organization.
Here's wishing you good luck.
