11+ WIP Jobs in Pune | WIP Job openings in Pune
- 6–10 years' experience in Oracle EBS (Techno-Functional)
- Experience in implementation, support, and upgrade projects
- Experience in at least two R12.2 upgrade projects
- Experience with Oracle EBS R12.2.7 or higher versions of R12
- Strong knowledge of SQL/PL/SQL, Reports, Forms, and Workflow; OAF is an added advantage
- At least 40% functional knowledge in SCM and Manufacturing (modules: PO, OM, INV, WIP, BOM, Quality, and Engineering)
- Excellent verbal and written communication skills
MUST-HAVES:
- LLM Integration & Prompt Engineering
- Context & Knowledge Base Design
- Experience running LLM evals
NOTICE PERIOD: Immediate – 30 Days
SKILLS: LLM, AI, PROMPT ENGINEERING
NICE TO HAVES:
- Data Literacy & Modelling Awareness
- Familiarity with Databricks, AWS, and ChatGPT Environments
ROLE PROFICIENCY:
Role Scope / Deliverables:
- Serve as the link between business intelligence, data engineering, and AI application teams, ensuring the Large Language Model (LLM) interacts effectively with the modeled dataset.
- Define and curate the context and knowledge base that enables GPT to provide accurate, relevant, and compliant business insights.
- Collaborate with Data Analysts and System SMEs to identify, structure, and tag data elements that feed the LLM environment.
- Design, test, and refine prompt strategies and context frameworks that align GPT outputs with business objectives.
- Conduct evaluation and performance testing (evals) to validate LLM responses for accuracy, completeness, and relevance.
- Partner with IT and governance stakeholders to ensure secure, ethical, and controlled AI behavior within enterprise boundaries.
KEY DELIVERABLES:
- LLM Interaction Design Framework: Documentation of how GPT connects to the modeled dataset, including context injection, prompt templates, and retrieval logic.
- Knowledge Base Configuration: Curated and structured domain knowledge to enable precise and useful GPT responses (e.g., commercial definitions, data context, business rules).
- Evaluation Scripts & Test Results: Defined eval sets, scoring criteria, and output analysis to measure GPT accuracy and quality over time.
- Prompt Library & Usage Guidelines: Standardized prompts and design patterns to ensure consistent business interactions and outcomes.
- AI Performance Dashboard / Reporting: Visualizations or reports summarizing GPT response quality, usage trends, and continuous improvement metrics.
- Governance & Compliance Documentation: Inputs to data security, bias prevention, and responsible AI practices in collaboration with IT and compliance teams.
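The "Evaluation Scripts & Test Results" deliverable above can be sketched as a small eval harness. This is a minimal illustration only: `run_llm` is a stubbed stand-in for whatever model client the team actually uses, and the keyword-based scoring is one simple choice among many eval metrics.

```python
# Hedged sketch: a minimal LLM eval loop with keyword-based scoring.
# `run_llm` is an assumed placeholder, not a real API client.

def run_llm(prompt: str) -> str:
    # Placeholder model call; swap in a real client in practice.
    canned = {
        "What was Q3 revenue?": "Q3 revenue was 4.2M USD, up 8% YoY.",
    }
    return canned.get(prompt, "I don't know.")

def score_response(response: str, required_keywords: list[str]) -> float:
    """Fraction of required keywords present in the response."""
    hits = sum(1 for kw in required_keywords if kw.lower() in response.lower())
    return hits / len(required_keywords)

def run_evals(eval_set: list[dict]) -> dict:
    """Run each eval case and aggregate an average score over the set."""
    scores = []
    for case in eval_set:
        response = run_llm(case["prompt"])
        scores.append(score_response(response, case["keywords"]))
    return {"cases": len(scores), "avg_score": sum(scores) / len(scores)}

eval_set = [
    {"prompt": "What was Q3 revenue?", "keywords": ["4.2M", "8%"]},
]
print(run_evals(eval_set))  # avg_score is 1.0 on this toy case
```

In a real deliverable the eval set, scoring criteria, and results would be versioned so quality can be tracked over time, as the bullet describes.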
KEY SKILLS:
Technical & Analytical Skills:
- LLM Integration & Prompt Engineering – Understanding of how GPT models interact with structured and unstructured data to generate business-relevant insights.
- Context & Knowledge Base Design – Skilled in curating, structuring, and managing contextual data to optimize GPT accuracy and reliability.
- Evaluation & Testing Methods – Experience running LLM evals, defining scoring criteria, and assessing model quality across use cases.
- Data Literacy & Modeling Awareness – Familiar with relational and analytical data models to ensure alignment between data structures and AI responses.
- Familiarity with Databricks, AWS, and ChatGPT Environments – Capable of working in cloud-based analytics and AI environments for development, testing, and deployment.
- Scripting & Query Skills (e.g., SQL, Python) – Ability to extract, transform, and validate data for model training and evaluation workflows.
Business & Collaboration Skills:
- Cross-Functional Collaboration – Works effectively with business, data, and IT teams to align GPT capabilities with business objectives.
- Analytical Thinking & Problem Solving – Evaluates LLM outputs critically, identifies improvement opportunities, and translates findings into actionable refinements.
- Commercial Context Awareness – Understands how sales and marketing intelligence data should be represented and leveraged by GPT.
- Governance & Responsible AI Mindset – Applies enterprise AI standards for data security, privacy, and ethical use.
- Communication & Documentation – Clearly articulates AI logic, context structures, and testing results for both technical and non-technical audiences.
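The context-injection and prompt-template skills listed above can be sketched as follows. Everything here is illustrative: `KNOWLEDGE_BASE`, `retrieve`, and `build_prompt` are assumed names, and the keyword retriever is a deliberately naive stand-in for a real retrieval layer.

```python
# Hedged sketch: injecting retrieved context into a prompt template,
# assuming a toy keyword retriever over a hand-curated knowledge base.

KNOWLEDGE_BASE = {
    "net revenue": "Net revenue = gross revenue minus returns and discounts.",
    "churn": "Churn = customers lost in a period / customers at period start.",
}

def retrieve(question: str, top_k: int = 2) -> list[str]:
    """Naive retrieval: return entries whose key appears in the question."""
    q = question.lower()
    return [text for key, text in KNOWLEDGE_BASE.items() if key in q][:top_k]

PROMPT_TEMPLATE = """You are a business analyst assistant.
Use ONLY the context below; answer "unknown" if the context does not cover it.

Context:
{context}

Question: {question}
"""

def build_prompt(question: str) -> str:
    context = "\n".join(retrieve(question)) or "(no matching context)"
    return PROMPT_TEMPLATE.format(context=context, question=question)

print(build_prompt("How do we define churn?"))
```

The "use ONLY the context" instruction is one common pattern for keeping outputs grounded and compliant, in line with the governance skills above.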
Strong Podcast & Video Script Writing Profile
Mandatory (Experience 1) – Must have 2+ years of experience in creative scripting and narrative design for podcasts
Mandatory (Experience 2) – Must have strong exposure to the finance industry and be able to simplify complex financial subjects into engaging, story-based scripts
Mandatory (Skills) – Must have hands-on expertise with AI/GenAI tools (e.g., Wondercraft, Heygen, NotebookLM, ElevenLabs) for content creation, prompting, and production at scale.
Mandatory (CTC) - The CTC breakup offered will be 80% fixed, and 20% variable, as per Company policy.
Role Overview
You will play a crucial role in performance tuning of the product, focusing on response time, load, and scalability. The role demands hands-on expertise in performance testing tools and strong troubleshooting skills to collaborate effectively with development and architecture teams.
Key Responsibilities
- Design, develop, and execute performance test scripts using tools such as JMeter, LoadRunner, or RPT.
- Conduct multi-user scenario scripting and load/stress testing.
- Analyze performance test results and provide bottleneck analysis and recommendations.
- Collaborate with developers and architects to optimize performance across response time, scalability, and throughput.
- Monitor system health during performance testing and troubleshoot performance issues.
- Document test strategy, results, and provide actionable insights to improve product performance.
- Contribute to performance tuning and capacity planning for cloud-based applications (added advantage).
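The "analyze results and provide bottleneck analysis" responsibility above can be sketched with a small results parser. This assumes a JMeter-style CSV results file with `elapsed` (milliseconds) and `label` columns; the sample rows are fabricated for illustration.

```python
# Hedged sketch: per-endpoint response-time summary over a JMeter-style
# .jtl results file. The sample data below is made up for illustration.
import csv
import io
import statistics

SAMPLE_JTL = """timeStamp,elapsed,label,success
1700000000000,120,/login,true
1700000000100,450,/search,true
1700000000200,130,/login,true
1700000000300,900,/search,true
1700000000400,140,/login,true
"""

def summarize(jtl_text: str) -> dict:
    """Per-label mean and max response times in milliseconds."""
    by_label: dict[str, list[int]] = {}
    for row in csv.DictReader(io.StringIO(jtl_text)):
        by_label.setdefault(row["label"], []).append(int(row["elapsed"]))
    return {
        label: {"mean_ms": statistics.mean(times), "max_ms": max(times)}
        for label, times in by_label.items()
    }

print(summarize(SAMPLE_JTL))
# Here /search stands out (mean 675 ms) versus /login (mean 130 ms),
# which is the kind of per-endpoint comparison bottleneck analysis starts from.
```

Real analysis would add percentiles, throughput, and error rates, but the shape of the report is the same.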
Required Skills & Experience
- 6–8 years of overall engineering product testing experience.
- At least 2 years in automation testing.
- At least 2 years in performance test analysis.
- Hands-on expertise in JMeter, RPT, or LoadRunner (mandatory).
- Strong background in performance engineering with the ability to troubleshoot and solve technical issues.
- Experience in cloud-based applications (preferred).
- Knowledge of C++ coding will be an added advantage.
- Excellent skills in performance monitoring, profiling, and analysis.
- Strong communication skills for technical discussions with development and architecture teams.
- You will participate in feature-solutioning discussions and finalize an approach aligned with requirements.
- Prepare HLDs and LLDs, and review design documents.
- Explore technology and hack your way through specific problems to accomplish the desired results.
- Own a module and deliver it end to end.
- Work in a team, at times showing leadership by taking initiative and guiding others.
- You should have evaluated new technologies and introduced relevant ones as and when required.
- You should have built multiple systems from the ground up and kept them running.
- You should have experience debugging client-side production setups under crisis.
Competitive Salary
About Solidatus
At Solidatus, we empower organizations to connect and visualize their data relationships, making it easier to identify, access, and understand their data. Our metadata management technology helps businesses establish a sustainable data foundation, ensuring they meet regulatory requirements, drive digital transformation, and unlock valuable insights.
We’re experiencing rapid growth—backed by HSBC, Citi, and AlbionVC, we secured £14 million in Series A funding in 2021. Our achievements include recognition in the Deloitte UK Technology Fast 50, multiple A-Team Innovation Awards, and a top 1% place to work ranking from The FinancialTechnologist.
Now is an exciting time to join us as we expand internationally and continue shaping the future of data management.
About the Engineering Team
Engineering is the heart of Solidatus. Our team of world-class engineers, drawn from outstanding computer science and technical backgrounds, plays a critical role in crafting the powerful, elegant solutions that set us apart. We thrive on solving challenging visualization and data management problems, building technology that delights users and drives real-world impact for global enterprises.
As Solidatus expands its footprint, we are scaling our capabilities with a focus on building world-class connectors and integrations to extend the reach of our platform. Our engineers are trusted with the freedom to explore, innovate, and shape the product’s future — all while working in a collaborative, high-impact environment. Here, your code doesn’t just ship — it empowers some of the world's largest and most complex organizations to achieve their data ambitions.
Who We Are & What You’ll Do
Join our Data Integration team and help shape the way data flows!
Your Mission:
To expand and refine our suite of out-of-the-box integrations, using our powerful API and SDK to bring in metadata for visualisation from a vast range of sources including databases with diverse SQL dialects.
But that is just the beginning. At our core, we are problem-solvers and innovators. You’ll have the chance to:
- Design intuitive layouts representing the flow of data across complex deployments of diverse technologies
- Design and optimize API connectivity and parsers that read metadata from source systems
- Explore new paradigms for representing data lineage
- Enhance our data ingestion capabilities to handle massive volumes of data
- Dig deep into data challenges to build smarter, more scalable solutions
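The metadata-ingestion idea above can be sketched as a toy connector. A real integration would go through the product's API/SDK and query each source's own catalog (e.g. `information_schema` in many SQL dialects); here SQLite's `PRAGMA table_info` stands in purely for illustration, and `extract_metadata` is an assumed name.

```python
# Hedged sketch: a toy metadata connector. SQLite's PRAGMA interface
# stands in for a real source system's catalog queries.
import sqlite3

def extract_metadata(conn: sqlite3.Connection) -> dict:
    """Map each user table to its list of (column, type) pairs."""
    tables = [r[0] for r in conn.execute(
        "SELECT name FROM sqlite_master WHERE type='table'")]
    return {
        t: [(col[1], col[2]) for col in conn.execute(f"PRAGMA table_info({t})")]
        for t in tables
    }

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
print(extract_metadata(conn))
# {'orders': [('id', 'INTEGER'), ('amount', 'REAL')]}
```

Output in this shape is roughly what a lineage tool needs before it can draw tables and columns as nodes in a flow.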
Beyond engineering, you’ll collaborate with users, troubleshoot tricky issues, streamline development workflows, and contribute to a culture of continuous improvement.
What We’re Looking For
- We don’t believe in sticking to a single tech stack just for the sake of it. We’re engineers first, and we pick the best tools for the job. More than ticking off a checklist, we value mindset, curiosity, and problem-solving skills.
- You’re quick to learn and love diving into new technologies
- You push for excellence and aren’t satisfied with “just okay”
- You can break down complex topics in a way that anyone can understand
- You should have 6–8 years of proven experience developing and delivering high-quality, scalable software solutions
- You should be a strong self-starter with the ability to take ownership of tasks and drive them to completion with minimal supervision.
- You should be able to mentor junior developers, perform code reviews, and ensure adherence to best practices in software engineering.
Tech & Skills We’d Love to See
Must-have:
- Strong hands-on experience with Java, Spring Boot RESTful APIs, and Node.js
- Solid knowledge of databases, SQL dialects, and data structures
Nice-to-have:
- Experience with C#, ASP.NET Core, TypeScript, React.js, or similar frameworks
- Bonus points for data experience—we love data wizards
If you’re passionate about engineering high-impact solutions, playing with cutting-edge tech, and making data work smarter, we’d love to have you on board!
You will get to design, architect and develop complex enterprise software and SaaS web applications leveraging modern web stack.
Roles & Responsibilities
Design & build highly scalable, high performance, responsive web applications.
Take full ownership and responsibility for building, shipping, and maintaining core product features, end to end. Help out in building the backend & front-end infrastructure.
Translation of requirements, designs and wireframes into high quality code. Collaborate closely with designers, engineers, founders and product managers.
Mentor team members and review their work.
You will enjoy this role if you...
Are a geek with a desire to stay ahead of the curve.
Like building beautiful well-architected software products with millions of users.
Work collaboratively as part of a close-knit team of geeks, architects and leads.
Desired Skills & Experience:
2 - 6 years of production experience with modern web frameworks - Ruby on Rails, Phoenix/Elixir and/or Django/Python.
Should have sound experience in developing scalable / distributed SaaS apps
Should have good knowledge and work experience in REST API implementations, JSON format handling, caching, sessions, multi-threading, etc.
Should be comfortable with database schema design and leveraging SQL & NoSQL (PostgreSQL, MySQL, Redis, Elasticsearch, DynamoDB)
Experience developing, consuming and transforming internal and 3rd party API's (REST and GraphQL)
Experience with code quality and reusability practices (CI/CD for back-end & front-end repos)
Solid foundation in data structures, algorithms, distributed systems, design patterns.
Strong understanding of software engineering best practices, including unit testing, code reviews, design documentation, debugging, troubleshooting, and agile development
Communication: You like discussing a plan upfront, welcome collaboration, and are an excellent verbal and written communicator.
Bachelor’s degree in Computer Science or equivalent experience.
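The caching experience listed above can be illustrated with a small in-process TTL cache. This is a pattern sketch only: production SaaS apps would more likely back this with Redis or memcached, and the names `ttl_cache` and `fetch_user` are made up for the example.

```python
# Hedged sketch: an in-process TTL cache decorator for expensive
# read-path calls. Illustrative only; not a substitute for Redis.
import functools
import time

def ttl_cache(seconds: float):
    """Cache a function's results for `seconds`, keyed by positional args."""
    def decorator(fn):
        store = {}  # args -> (expiry_time, value)
        @functools.wraps(fn)
        def wrapper(*args):
            now = time.monotonic()
            hit = store.get(args)
            if hit and hit[0] > now:
                return hit[1]        # fresh entry: serve from cache
            value = fn(*args)
            store[args] = (now + seconds, value)
            return value
        return wrapper
    return decorator

calls = []  # track how often the "backend" is really hit

@ttl_cache(seconds=60)
def fetch_user(user_id: int) -> dict:
    calls.append(user_id)
    return {"id": user_id, "name": f"user-{user_id}"}

fetch_user(1)
fetch_user(1)
print(len(calls))  # 1 -- the second call was served from the cache
```

The same decorator shape extends naturally to an external cache by swapping the `store` dict for a client with expiry support.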
Bonus points if you have...
Exposure to front-end technologies like React/Redux, JavaScript/TypeScript, etc.
Cloud native development on AWS or GCP
Experience implementing container technologies like Docker and Kubernetes. Knowledge of continuous integration, continuous delivery, and enterprise DevOps concepts.