
We, the Products team at DataWeave, build data products that provide timely insights that are readily consumable and actionable, at scale. Our underpinnings are scale, impact, engagement, and visibility. We help businesses make data-driven decisions every day and give them insights for long-term strategy. We are focused on creating value for our customers and helping them succeed.
How we work
It's hard to tell what we love more: problems or solutions! Every day, we choose to address some of the hardest data problems there are. We are in the business of making sense of messy public data on the web, at serious scale. Read more at Become a DataWeaver.
What do we offer?
- Opportunity to work on some of the most compelling data products that we are building for online retailers and brands.
- Ability to see the impact of your work and the value you are adding to our customers almost immediately.
- Opportunity to work on a variety of challenging problems and technologies to figure out what really excites you.
- A culture of openness. Fun work environment. A flat hierarchy. Organization-wide visibility. Flexible working hours.
- Learning opportunities with courses, training, and tech conferences. Mentorship from seniors in the team.
- Last but not least, competitive salary packages and fast-paced growth opportunities.
Roles and Responsibilities:
● Build a low-latency serving layer that powers DataWeave's Dashboards, Reports, and Analytics functionality
● Build robust RESTful APIs that serve data and insights to DataWeave and other products
● Design user interaction workflows on our products and integrate them with data APIs
● Help stabilize and scale our existing systems. Help design the next generation systems.
● Scale our back end data and analytics pipeline to handle increasingly large amounts of data.
● Work closely with the Head of Products and UX designers to understand the product vision and design
philosophy
● Lead/be a part of all major tech decisions. Bring in best practices. Mentor younger team members and
interns.
● Constantly think scale, think automation. Measure everything. Optimize proactively.
● Be a tech thought leader. Add passion and vibrancy to the team. Push the envelope.
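The serving-layer responsibilities above lean heavily on caching hot reads. As a rough illustration only (not DataWeave's actual stack; the `dashboard_metrics` function and its payload are invented), a minimal TTL cache for a low-latency serving layer might look like:

```python
import time
from functools import wraps

def ttl_cache(ttl_seconds=60, maxsize=1024):
    """Memoize a function's results for ttl_seconds, evicting the oldest entry past maxsize."""
    def decorator(fn):
        cache = {}  # key -> (expires_at, value)
        @wraps(fn)
        def wrapper(*args):
            now = time.monotonic()
            hit = cache.get(args)
            if hit is not None and hit[0] > now:
                return hit[1]  # fresh cached value: skip the expensive call
            value = fn(*args)
            if len(cache) >= maxsize:
                cache.pop(next(iter(cache)))  # drop the oldest insertion
            cache[args] = (now + ttl_seconds, value)
            return value
        wrapper.cache = cache
        return wrapper
    return decorator

@ttl_cache(ttl_seconds=30)
def dashboard_metrics(retailer_id):
    # Placeholder for an expensive aggregation over the analytics store.
    return {"retailer": retailer_id, "skus_tracked": 12_000}

print(dashboard_metrics("acme"))
```

Repeated calls within the TTL return the cached result, which is the basic trade-off (staleness for latency) behind most read-heavy dashboard backends.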
Skills and Requirements:
● 5-7 years of experience building and scaling APIs and web applications.
● Experience building and managing large scale data/analytics systems.
● Have a strong grasp of CS fundamentals and excellent problem solving abilities. Have a good understanding of software design principles and architectural best practices.
● Be passionate about writing code and have experience coding in multiple languages, including at least one scripting language, preferably Python.
● Be able to argue convincingly why feature X of language Y rocks/sucks, or why a certain design decision is right/wrong, and so on.
● Be a self-starter—someone who thrives in fast paced environments with minimal ‘management’.
● Have experience working with multiple storage and indexing technologies such as MySQL, Redis, MongoDB, Cassandra, and Elasticsearch.
● Good knowledge (including internals) of messaging systems such as Kafka and RabbitMQ.
● Use the command line like a pro. Be proficient in Git and other essential software development tools.
● Working knowledge of large-scale computational models such as MapReduce and Spark is a bonus.
● Exposure to one or more centralized logging, monitoring, and instrumentation tools, such as Kibana, Graylog, StatsD, or Datadog.
● Working knowledge of building websites and apps. Good understanding of integration complexities and dependencies.
● Working knowledge of Linux server administration as well as the AWS ecosystem is desirable.
● It's a huge bonus if you have personal projects (including open-source contributions) that you work on in your spare time. Show off some of the projects you have hosted on GitHub.

Certa (getcerta.com) is a Silicon Valley-based startup automating the vendor, supplier, and stakeholder onboarding processes for businesses globally. Serving Fortune 500 and Fortune 1000 clients, Certa's engineering team tackles expansive and deeply technical challenges, driving innovation in business processes across industries.
Location: Remote (India only)
Role Overview
We are looking for an experienced and innovative AI Engineer to join our team and push the boundaries of large language model (LLM) technology to drive significant impact in our products and services. In this role, you will leverage your strong software engineering skills (particularly in Python and cloud-based backend systems) and your hands-on experience with cutting-edge AI (LLMs, prompt engineering, Retrieval-Augmented Generation, etc.) to build intelligent features for enterprise B2B SaaS products. As an AI Engineer on our team, you will design and deploy AI-driven solutions (such as LLM-powered agents and context-aware systems) from prototype to production, iterating quickly and staying up to date with the latest developments in the AI space. This is a unique opportunity to be at the forefront of a new class of engineering roles that blend robust backend system design with state-of-the-art AI integration, shaping the future of user experiences in our domain.
Key Responsibilities
- Design and Develop AI Features: Lead the design, development, and deployment of generative AI capabilities and LLM-powered services that deliver engaging, human-centric user experiences. This includes building features like intelligent chatbots, AI-driven recommendations, and workflow automation into our products.
- RAG Pipeline Implementation: Design, implement, and continuously optimize end-to-end RAG (Retrieval-Augmented Generation) pipelines, including data ingestion and parsing, document chunking, vector indexing, and prompt engineering strategies to provide relevant context to LLMs. Ensure that our AI systems can efficiently retrieve and use information from knowledge bases to enhance answer accuracy.
- Build LLM-Based Agents: Develop and refine LLM-based agentic systems that can autonomously perform complex tasks or assist users in multi-step workflows. Incorporate tools for planning, memory, and context management (e.g. long-term memory stores, tool use via APIs) to extend the capabilities of our AI agents. Experiment with emerging best practices in agent design (planning algorithms, self-healing loops, etc.) to make these agents more reliable and effective.
- Integrate with Product Teams: Work closely with product managers, designers, and other engineers to integrate AI capabilities seamlessly into our products, ensuring that features align with user needs and business goals. You’ll collaborate cross-functionally to translate product requirements into AI solutions, and iterate based on feedback and testing.
- System Evaluation & Iteration: Rigorously evaluate the performance of AI models and pipelines using appropriate metrics, including accuracy/correctness, response latency, and avoidance of errors like hallucinations. Conduct thorough testing and use user feedback to drive continuous improvements in model prompts, parameters, and data processing.
- Code Quality & Best Practices: Write clean, maintainable, and testable code while following software engineering best practices. Ensure that the AI components are well-structured, scalable, and fit into our overall system architecture. Implement monitoring and logging for AI services to track performance and reliability in production.
- Mentorship and Knowledge Sharing: Provide technical guidance and mentorship to team members on best practices in generative AI development. Help educate and upskill colleagues (e.g. through code reviews, tech talks) in areas like prompt engineering, using our AI toolchain, and evaluating model outputs. Foster a culture of continuous learning and experimentation with new AI technologies.
- Research & Innovation: Continuously explore the latest advancements in AI/ML (new model releases, libraries, techniques) and assess their potential value for our products. You will have the freedom to prototype innovative solutions, for example trying new fine-tuning methods or integrating new APIs, and bring those into our platform if they prove beneficial. Staying current with emerging research and industry trends is a key part of this role.
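The ingestion, chunking, indexing, and prompt-assembly steps described above can be sketched end to end. This is a toy illustration, not Certa's implementation: the bag-of-words "embedding" stands in for a real embedding model, and the sample document is invented.

```python
import math
import re
from collections import Counter

def chunk(text, max_words=40):
    """Split a document into fixed-size word windows (real pipelines chunk by tokens/sections)."""
    words = text.split()
    return [" ".join(words[i:i + max_words]) for i in range(0, len(words), max_words)]

def embed(text):
    """Toy bag-of-words 'embedding'; production systems use a learned embedding model."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, chunks, k=2):
    """Rank chunks by similarity to the query and keep the top k."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

def build_prompt(query, chunks):
    """Assemble retrieved context plus the question into a grounded prompt."""
    context = "\n".join(f"- {c}" for c in retrieve(query, chunks))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = chunk("Vendors are onboarded after a compliance review. "
             "Suppliers submit tax forms during onboarding.", max_words=8)
print(build_prompt("What do suppliers submit?", docs))
```

Swapping the toy `embed` for a real embedding model and the list of chunks for a vector database gives the shape of a production RAG pipeline.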
Required Skills and Qualifications
- Software Engineering Experience: 3+ years (mid-level) / 5+ years (senior) of professional software engineering experience. Rock-solid backend development skills with expertise in Python and designing scalable APIs/services. Experience building and deploying systems on AWS or similar cloud platforms is required (including familiarity with cloud infrastructure and distributed computing). Strong system design abilities with a track record of designing robust, maintainable architectures is a must.
- LLM/AI Application Experience: Proven experience building applications that leverage large language models or generative AI. You have spent time prompting and integrating language models into real products (e.g. building chatbots, semantic search, AI assistants) and understand their behavior and failure modes. Demonstrable projects or work in LLM-powered application development, especially using techniques like RAG or building LLM-driven agents, will make you stand out.
- AI/ML Knowledge: Prioritize applied LLM product engineering over traditional ML pipelines. Strong chops in prompt design, function calling/structured outputs, tool use, context-window management, and the RAG levers that matter (document parsing/chunking, metadata, re-ranking, embedding/model selection). Make pragmatic model/provider choices (hosted vs. open) using latency, cost, context length, safety, and rate-limit trade-offs; know when simple prompting/config changes beat fine-tuning, and when lightweight adapters or fine-tuning are justified. Design evaluation that mirrors product outcomes: golden sets, automated prompt unit tests, offline checks, and online A/Bs for helpfulness/correctness/safety; track production proxies like retrieval recall and hallucination rate. Solid understanding of embeddings, tokenization, and vector search fundamentals, plus working literacy in transformers to reason about capabilities/limits. Familiarity with agent patterns (planning, tool orchestration, memory) and guardrail/safety techniques.
- Tooling & Frameworks: Hands-on experience with the AI/LLM tech stack and libraries. This includes proficiency with LLM orchestration libraries such as LangChain, LlamaIndex, etc., for building prompt pipelines. Experience working with vector databases or semantic search (e.g. Pinecone, Chroma, Milvus) to enable retrieval-augmented generation is highly desired.
- Cloud & DevOps: Own the productionization of LLM/RAG-backed services as high-availability, low-latency backends. Expertise in AWS (e.g., ECS/EKS/Lambda, API Gateway/ALB, S3, DynamoDB/Postgres, OpenSearch, SQS/SNS/Step Functions, Secrets Manager/KMS, VPC) and infrastructure-as-code (Terraform/CDK). You’re comfortable shipping stateless APIs, event-driven pipelines, and retrieval infrastructure (vector stores, caches) with strong observability (p95/p99 latency, distributed tracing, retries/circuit breakers), security (PII handling, encryption, least-privilege IAM, private networking to model endpoints), and progressive delivery (blue/green, canary, feature flags). Build prompt/config rollout workflows, manage token/cost budgets, apply caching/batching/streaming strategies, and implement graceful fallbacks across multiple model providers.
- Product and Domain Experience: Experience building enterprise (B2B SaaS) products is a strong plus. This means you understand considerations like user experience, scalability, security, and compliance. Past exposure to these types of products will help you design AI solutions that cater to a range of end-users.
- Strong Communication & Collaboration: Excellent interpersonal and communication skills, with an ability to explain complex AI concepts to non-technical stakeholders and create clarity from ambiguity. You work effectively in cross-functional teams and can coordinate with product, design, and ops teams to drive projects forward.
- Problem-Solving & Autonomy: Self-motivated and able to manage multiple priorities in a fast-paced environment. You have a demonstrated ability to troubleshoot complex systems, debug issues across the stack, and quickly prototype solutions. A “figure it out” attitude and creative approach to overcoming technical challenges are key.
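The golden sets and automated prompt unit tests mentioned under AI/ML Knowledge can start as simple substring checks over canned cases. A minimal sketch follows; the cases and the `fake_llm` stub are invented for illustration, and a real harness would call your model provider and track results over time.

```python
# Each golden case pairs an input with must-have and must-not-have substrings.
GOLDEN_SET = [
    {"question": "What is our refund window?",
     "must_include": ["30 days"], "must_exclude": ["45 days"]},
    {"question": "Who approves vendor onboarding?",
     "must_include": ["compliance team"], "must_exclude": []},
]

def fake_llm(question):
    """Stand-in for a real model call; swap in your provider's client here."""
    canned = {
        "What is our refund window?": "Refunds are accepted within 30 days of purchase.",
        "Who approves vendor onboarding?": "The compliance team approves vendor onboarding.",
    }
    return canned[question]

def run_eval(llm, golden_set):
    """Score a model against the golden set and collect any failing cases."""
    failures = []
    for case in golden_set:
        answer = llm(case["question"]).lower()
        missing = [s for s in case["must_include"] if s.lower() not in answer]
        forbidden = [s for s in case["must_exclude"] if s.lower() in answer]
        if missing or forbidden:
            failures.append({"question": case["question"],
                             "missing": missing, "forbidden": forbidden})
    return {"passed": len(golden_set) - len(failures),
            "total": len(golden_set), "failures": failures}

print(run_eval(fake_llm, GOLDEN_SET))
```

Running this in CI on every prompt change is one lightweight way to catch regressions in helpfulness and correctness before an online A/B.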
Preferred (Bonus) Qualifications
- Multi-Modal and Agents: Experience developing complex agentic systems using LLMs (for example, multi-agent systems or integrating LLMs with tool networks) is a bonus. Similarly, knowledge of multi-modal AI (combining text with vision or other data) could be useful as we expand our product capabilities.
- Startup/Agile Environment: Prior experience in an early-stage startup or similarly fast-paced environment where you’ve worn multiple hats and adapted to rapid changes. This role will involve quick iteration and evolving requirements, so comfort with ambiguity and agility is valued.
- Community/Research Involvement: Active participation in the AI community (open-source contributions, research publications, or blogging about AI advancements) is appreciated. It demonstrates passion and keeps you at the cutting edge. If you have published research or have a portfolio of AI side projects, let us know!
Perks of working at Certa.ai:
- Best-in-class compensation
- Fully-remote work with flexible schedules
- Continuous learning
- Massive opportunities for growth
- Yearly offsite
- Quarterly hacker house
- Comprehensive health coverage
- Parental Leave
- Latest Tech Workstation
- Rockstar team to work with (we mean it!)
- Write clean, scalable code using .NET programming languages.
- Develop web-based software using ASP.NET, SQL Server, MVC, C#, and Entity Framework.
- Revise, update, refactor, and debug code.
- Participate as a team member in all phases of the software lifecycle, including the analysis and design of software systems.
- Participate in integrated testing of the product/package.
- Deploy applications on client servers.
- Make changes to existing web applications based on feedback from end users or clients.
- Design and develop REST APIs using ASP.NET/C#.
Level of skills and experience:
5 years of hands-on experience using Python, Spark, and SQL.
Experienced in AWS Cloud usage and management.
Experience with Databricks (Lakehouse, ML, Unity Catalog, MLflow).
Experience using various ML models and frameworks such as XGBoost, LightGBM, and Torch.
Experience with orchestrators such as Airflow and Kubeflow.
Familiarity with containerization and orchestration technologies (e.g., Docker, Kubernetes).
Fundamental understanding of Parquet, Delta Lake and other data file formats.
Proficiency on an IaC tool such as Terraform, CDK or CloudFormation.
Strong written and verbal English communication skills; proficient in communicating with non-technical stakeholders.
Hiring PHP Developer
Job description
Proven work experience of 1-2 years in PHP/Laravel development in a competitive environment
Proficient in developing applications based on Laravel 5 or greater
Proficient in handling MySQL databases (experience in PostgreSQL is a plus)
Experience in other PHP frameworks such as Yii, Zend, CakePHP, or CodeIgniter is an added advantage
Experience in analyzing and modifying existing open-source/plugin code and extensions (jQuery, XML, JavaScript, HTML, CSS, AJAX)
Experience in SOAP, REST and web services
About the company
The company has set a benchmark in the medical and health industry with its revolutionary digital changes. It has had a significant impact on the country's education and health sectors, working to uplift and develop digital support for India's medical education through technology. Our products are designed and developed to benefit medical aspirants as well as the country's health-education system. Through its continuous efforts, many medical institutions have successfully adopted a digitalised, advanced way of teaching and learning. Its MedWhiz LMS is highly effective and essential for medical aspirants.
Experience : 3-4 Years
Responsibilities
● Integration of user-facing elements developed by front-end developers with server-side logic.
● Building reusable code and libraries for future use.
● Optimization of the application for maximum speed and scalability.
● Implementation of security and data protection.
● Design and implementation of data storage solutions.
Skills
● Basic understanding of front-end technologies and platforms, such as JavaScript, HTML5, and CSS3.
● Proficient knowledge of a back-end programming language, i.e., Node.js (mandatory).
● Proficient knowledge of a NoSQL database, i.e., MongoDB (mandatory).
● Proficient understanding of REST APIs (mandatory).
● Proficient understanding of code versioning tools, such as Git.
● Basic understanding of AWS.
● User authentication and authorization between multiple systems, servers, and
environments.
● Integration of multiple data sources and databases into one system.
● Management of the hosting environment, including database administration
and scaling an application to support load changes.
● Data migration, transformation, and scripting.
● Setup and administration of backups.
● Outputting data in different formats.
● Understanding differences between multiple delivery platforms such as mobile
vs desktop, and optimizing output to match the specific platform.
● Creating database schemas that represent and support business processes.
● Implementing automated testing platforms and unit tests.
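Several of the skills above, such as creating schemas that support business processes and handling user authentication, come together in even a tiny example. The sketch below uses Python's built-in sqlite3 purely so it is self-contained; the role itself centres on Node.js and MongoDB, and the tables, email, and token here are invented.

```python
import sqlite3

# In-memory database standing in for the service's real datastore.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (
        id    INTEGER PRIMARY KEY,
        email TEXT NOT NULL UNIQUE
    );
    CREATE TABLE sessions (
        token   TEXT PRIMARY KEY,
        user_id INTEGER NOT NULL REFERENCES users(id)
    );
""")

# Register a user, then issue a session token for them.
conn.execute("INSERT INTO users (email) VALUES (?)", ("student@example.com",))
user_id = conn.execute("SELECT id FROM users WHERE email = ?",
                       ("student@example.com",)).fetchone()[0]
conn.execute("INSERT INTO sessions (token, user_id) VALUES (?, ?)", ("abc123", user_id))

# Authenticate a request: resolve a token back to the user it belongs to.
row = conn.execute(
    "SELECT u.email FROM sessions s JOIN users u ON u.id = s.user_id WHERE s.token = ?",
    ("abc123",),
).fetchone()
print(row[0])
```

The same shape (users table, sessions keyed by token, a join on each request) carries over directly to a document store, where the session document would embed or reference the user id.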
Regards,
Team Merito
- Bachelor's or master's degree in Computer Engineering, Computer Science, Computer Applications, Mathematics, Statistics, or a related technical field. Relevant experience of at least 3 years in lieu of the above if from a different stream of education.
- Well-versed in, and with 3+ years of hands-on demonstrable experience in:
▪ Stream and batch big data pipeline processing using Apache Spark and/or Apache Flink
▪ Distributed cloud-native computing, including serverless functions
▪ Relational, object store, document, graph, etc. database design and implementation
▪ Microservices architecture and API modeling, design, and programming
- 3+ years of hands-on development experience in Apache Spark using Scala and/or Java.
- Ability to write executable code for services using Spark RDD, Spark SQL, Structured Streaming, Spark MLlib, etc., with a deep technical understanding of the Spark processing framework.
- In-depth knowledge of standard programming languages such as Scala and/or Java.
- 3+ years of hands-on development experience in one or more libraries and frameworks such as Apache Kafka, Akka, Apache Storm, Apache NiFi, ZooKeeper, and the Hadoop ecosystem (i.e., HDFS, YARN, MapReduce, Oozie, and Hive); extra points if you can demonstrate your knowledge with working examples.
- 3+ years of hands-on development experience in one or more relational and NoSQL datastores such as PostgreSQL, Cassandra, HBase, MongoDB, DynamoDB, Elasticsearch, and Neo4j.
- Practical knowledge of distributed systems involving partitioning, bucketing, the CAP theorem, replication, horizontal scaling, etc.
- Passion for distilling large volumes of data and analyzing performance, scalability, and capacity issues in big data platforms.
- Ability to clearly distinguish between system and Spark job performance, and to perform Spark performance tuning and resource optimization.
- Perform benchmarking/stress tests and document best practices for different applications.
- Proactively work with tenants on improving overall performance, and ensure the system is resilient and scalable.
- Good understanding of virtualization and containerization; must demonstrate experience in technologies such as Kubernetes, Istio, Docker, OpenShift, Anthos, Oracle VirtualBox, and Vagrant.
- Well-versed in, with demonstrable working experience in, API management, API gateways, service mesh, identity and access management, and data protection and encryption.
- Hands-on experience with DevOps tools and platforms, viz. Jira, Git, Jenkins, code quality and security plugins, Maven, Artifactory, Terraform, Ansible/Chef/Puppet, and Spinnaker.
- Well-versed in AWS and/or Azure and/or Google Cloud; must demonstrate experience in at least FIVE (5) services offered under AWS, Azure, or Google Cloud in any of these categories: Compute, Storage, Database, Networking & Content Delivery, Management & Governance, Analytics, or Security, Identity, & Compliance (or equivalent demonstrable cloud platform experience).
- Good understanding of storage, networks, and storage networking basics, which will enable you to work in a cloud environment.
- Good understanding of network, data, and application security basics, which will enable you to work in a cloud as well as a business applications/API services environment.
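The partitioning and horizontal-scaling concepts listed above boil down to routing keys stably across nodes. A minimal hash-partitioning sketch follows; the keys and partition counts are illustrative, and real systems typically use consistent hashing to limit data movement when the cluster resizes.

```python
import hashlib

def partition(key, num_partitions):
    """Map a record key to a partition via a stable hash (deterministic across processes)."""
    digest = hashlib.md5(key.encode()).hexdigest()
    return int(digest, 16) % num_partitions

# Route order records to 4 partitions; the same key always lands on the same partition.
orders = ["order-1001", "order-1002", "order-1003", "order-1004"]
placement = {key: partition(key, 4) for key in orders}
print(placement)

# Naive mod-N placement can move many keys when N changes, which is why production
# systems prefer consistent hashing (only about 1/N of keys move on a resize).
moved = sum(partition(k, 4) != partition(k, 5) for k in orders)
print(f"{moved} of {len(orders)} keys would move when growing 4 -> 5 partitions")
```

Note the use of `hashlib` rather than Python's built-in `hash()`, which is randomized per process and therefore unsuitable for routing data between machines.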
Minimum 5-7 years of professional experience building web applications
Strong experience in backend development
Strong experience in AI/machine learning development and data analysis/processing
Experience in creating robust and secure REST APIs
Hands-on exposure to RDBMSs like PostgreSQL, MySQL, and MariaDB
Proficiency with one or more programming languages from Java, Python, and Node/JavaScript
Solid familiarity with cloud and related technologies, including the AWS, GCP, and Azure cloud environments
Ability to do very quick research in unknown technologies
Startup mindset, comfort with chaos, and multi-tasking ability
Strong programming fundamentals in data structures and algorithms
An eye for writing optimally performing code in any toolset
We’re looking for a Senior Backend Engineer to help us build the tools, services, and applications that will enable us to be the planet’s most patient-focused pharmacy. You will work on projects ranging from greenfield initiatives to matured products with an active user base.
What you’ll do:
- Collaborate with the engineering team to design and implement products using an event-driven microservices architecture.
- Interact with Product Managers and key stakeholders on a regular basis to understand and execute the product vision.
- Take charge of end-to-end feature development, right from proof of concept to production deployment and support.
- Brainstorm and lead continuous improvement projects for the product.
- Provide feedback to peer developers on code quality and development standards.
- Mentor the team on strong coding and design standards.
- Set the bar high for development practices while striking the right balance between pragmatism and perfection for code and processes.
- Effectively document APIs and services using Swagger/OpenAPI, visualize flows and dependencies using diagrams, and create technical documents on engineering wikis.
- Present crisp and clear feature demos to stakeholders.
Some of our open-source projects: https://github.com/medly
Your tool-belt:
We don’t expect anybody to be an expert on all of these, but you should be deeply familiar with some, and a self-starting learner who isn’t afraid to ask for help:
- Polyglot Development (at least one more language apart from Java and JavaScript)
- At least one JVM language
- Functional Programming
- OO Design, Clean Code, SOLID principles
- Solid grasp of HTTP and REST
- Web based SaaS product development
- Deployment to AWS or any cloud
- Test Driven Development
- Security Compliance like HIPAA, PCI-DSS, SOC2
- Git, Linux, CI/CD, Gradle, IDEA
- Security based on OAuth2 / OIDC
- SQL and NoSQL DBs like PostgreSQL, DynamoDB, Elasticsearch
- Docker
- SQL Migration tools like Liquibase, Flyway
- Semantic Versioning, Feature Toggles, PR, Feature Branches
What you'll need:
- 3+ years of experience, mainly in backend development using any two languages (Java, Kotlin, Node.js, Ruby, etc.)
- Minimum of a bachelor's degree in Computer Science, Engineering, or a related field.
You will work on the following:
- Develop services/APIs using Kotlin/Micronaut and Node/Serverless
- Aurora Serverless PostgreSQL DB
- AWS, Lambda, API Gateway, Amplify, Cognito, Okta
- Github, Github Actions, SonarCloud
- Database versioning using tools like Liquibase
- Terraform to manage infra
- Cloudwatch/AWS X-Ray to monitor the infra
- Opportunity to make Open Source Contribution
You will be a part of: Supply Chain Management (SCM)
Myntra-Jabong Supply Chain Management systems form the backbone of our core business and customer experience. Any business runs on a simple construct of Demand (Consumer) and Supply (Producer). However, a set of complex and intricate methods, processes and systems connect the demand with supply in a deterministic and predictable way. These methods, processes and systems collectively form the Supply Chain for the business. The multi-billion-dollar Myntra-Jabong business fundamentally rests on a set of highly scalable, robust and intelligent Supply Chain Management systems that solve real-world problems of predicting the demand from millions of our customers, for a combination of millions of products from our product catalogue, and intelligently connecting that demand to thousands of national and international sellers or suppliers using a set of advanced homegrown tech products that we build and manage.
SCM engineering employs new-age technologies such as distributed computing constructs, machine learning, deep learning, computer vision, and artificial intelligence; scalable data stores in MongoDB, Redis, Cassandra, MySQL, Elasticsearch, and Solr; scalable programming constructs in Node.js, GoLang, Java, JavaScript, and Python; and new-age frameworks such as ReactJS and React Native to solve some of the hardest problems in the e-commerce business with world-class software products.
The SCM engineering at Myntra-Jabong operates within two distinct verticals: Supply-chain Outbound (Fulfilment systems) & Supply-chain Inbound (Selection systems, Partner experience).
Your Responsibilities:
● Own the architecture of Myntra’s new product platforms to drive business results
● Be a visible leader who drives and owns the architecture and design of some of the most advanced and complex software systems/products in the industry to create company-wide impact.
● Help build, mentor and coach a team of very talented Engineers, Architects, Quality engineers, System Operation Engineers and DevOps engineers in architectural and design best practices.
● Be an operational and technical leader with a passion for distributed systems, cloud service
development, deployment and delivery.
● Be accountable for the design, for the ease of evolution, quality of the systems, performance, scaling, and availability characteristics and limitations of the systems.
● Envision and develop the long-term architectural direction, with emphasis on platforms/ reusable components while adopting a nimble delivery process. Establish structures and processes that ensure a high level of quality and reliability and extensibility of deliverables.
● Drive the creation of next generation extensible web, mobile and fashion commerce platforms, security protocols, customisation and tools to support continuous scaling, internationalization and platform extensions
● Drive code and design reviews of components/systems/products in scope and drive the architectural governance for them
● Set directional paths for the teams/department for adoption of new technology stacks for solving business problems
● Be a very visible representative of multiple technology domains and represent Myntra in external technical forums
● Work with product management, business stakeholders and other engineering leaders to help define mid-term, long-term roadmaps and shape business directions
● Initiate and deliver leadership training within the engineering organisation, including training new managers, and drive the growth of leaders to create a strong leadership bench
Desired Skills and Experience
● 12 - 16 years of experience in software product development
● Must have a degree in Computer Science or related field
● A solid engineer at heart with excellent abstraction, coding and system design skills
● Proven track record of leading the architecture and delivery in a startup/e-commerce ecosystem within a high growth & matrix environment
● Successfully architected and led technology for consumer-facing products in the global market, along with being a proficient problem-solver who envisions business and technical perspectives to develop workable solutions
● Must have exposure to leading product development end-to-end (portfolio to delivery, re-architectures)
● Strong hands-on technology experience building and running large scale systems handling
multi-million sessions/transactions per day
● Solid experience in large scale Database systems like RDBMS & NoSQL stores
● Strong design/development experience in building massively large scale distributed internet systems and products
● Excellent programming skills in Java/GO and expertise in multi-threading and performance-oriented programming
● Solid experience in Distributed systems, highly scalable products, performance & reliability
● Excellent understanding of processing platforms and queues
● Experience and knowledge of open source software, frameworks and broader cutting edge
technologies around server-side development in Java
● Strong understanding of object-oriented programming, concurrency and fundamentals of
computer-science







