
Their services are available across the globe, with over 65% of their client base coming from the US, UK, and Canada. The company's primary focus is on Ayurveda: bringing this ancient knowledge to anyone who wishes to restore balance to their health and apply its tools in everyday life.
- Working with the marketing team on creatives and digital marketing to drive traffic to key product types
- Working with the performance marketing team for better coordination across push, SMS, email, Facebook, and Google channels
- Merchandising - Liaising with the merchandising team to ensure we know what sells and that it is served in the right position (sorting), i.e. which customer sees what in each category, all backed by data-driven experiments
- Product Flow - Ensuring product flow is smooth, periodically weeding out excess options and adding new relevant categories at L1, grid, etc.
- Monitoring and taking relevant actions to drive visits and conversions
- Working with the assisted-sales team to drive better conversions, SOPs, etc.
- Managing online inventory for each category and sub-category, and planning ahead of sales
- Liaising with brand SPOCs to plan peak marketing for margin/inventory, clearance-sale stock with revised margins, return clauses, etc.
- 6+ years of experience managing a category in an e-commerce set-up or at a large FMCG company
- Business development mindset and strong operational coordination skills
- Good negotiation skills
- Proficient in MS Office
- Attention to detail and comfortable working in a ground-up business environment.
- Team-handling experience
- Strong experience in conversion improvement in an online environment, preferred
- Product management experience (not necessarily hard-core product experience, but an understanding of customer journeys and basic UI/UX) would be an added advantage.
- Proven track record of scale-up and contributing to the revenue numbers.
- Understanding and ability to contribute to the various performance metrics (User/ Vendor/ Inventory/ Invoicing) across both online and offline channels.

- Experience:
- 7+ years of experience in ETL development using IBM DataStage.
- Hands-on experience with designing, developing, and maintaining ETL jobs for data warehousing or business intelligence solutions.
- Experience with data integration across relational databases (e.g., IBM DB2, Oracle, MS SQL Server), flat files, and other data sources.
- Technical Skills:
- Strong proficiency in IBM DataStage (Designer, Director, Administrator, and Manager components).
- Expertise in SQL and database programming (e.g., PL/SQL, T-SQL).
- Familiarity with data warehousing concepts, data modeling, and ETL/ELT processes.
- Experience with scripting languages (e.g., UNIX shell scripting) for automation.
- Knowledge of CI/CD tools (e.g., Git, BitBucket, Artifactory) and Agile methodologies.
- Familiarity with IBM Watsonx.data integration or other ETL tools (e.g., Informatica, Talend) is a plus.
- Experience with big data technologies (e.g., Hadoop) is an advantage.
- Soft Skills:
- Excellent problem-solving and analytical skills.
- Strong communication and interpersonal skills to collaborate with stakeholders and cross-functional teams.
- Ability to work independently and manage multiple priorities in a fast-paced environment.
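A core task implied by the requirements above is writing warehouse-style transforms. As a generic illustration (not DataStage-specific, and with hypothetical table and column names), here is the classic surrogate-key lookup that ETL jobs perform when loading facts against a dimension:

```python
# Minimal sketch of a warehouse-style ETL step: assign surrogate keys to
# incoming fact rows via a dimension lookup. Any ETL tool (DataStage
# included) performs this kind of transform; names here are hypothetical.

customer_dim = {}  # natural key -> surrogate key
next_sk = 1

def lookup_or_insert(natural_key):
    """Return the surrogate key for a customer, inserting a new
    dimension row if the natural key has not been seen before."""
    global next_sk
    if natural_key not in customer_dim:
        customer_dim[natural_key] = next_sk
        next_sk += 1
    return customer_dim[natural_key]

def transform(fact_rows):
    """Replace the natural customer key with a surrogate key."""
    out = []
    for row in fact_rows:
        out.append({
            "customer_sk": lookup_or_insert(row["customer_id"]),
            "amount": row["amount"],
        })
    return out

facts = [
    {"customer_id": "C100", "amount": 25.0},
    {"customer_id": "C200", "amount": 10.0},
    {"customer_id": "C100", "amount": 5.0},
]
loaded = transform(facts)
```

Repeat natural keys map to the same surrogate key, so the two C100 rows above share surrogate key 1.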
Description
We are seeking a skilled and detail-oriented Software Developer to automate our internal workflows and develop internal tools used by our development team.
We follow these practices: unit testing, continuous integration (CI), continuous deployment (CD), and DevOps.
We have codebases in Go, Java, Python, Vue.js, and Bash, and we support the development team that develops C code.
You should enjoy challenges, exploring new fields, and finding solutions to problems.
You will be responsible for coordinating, automating, and validating internal workflows and for ensuring operational stability and system reliability.
Requirements
- Bachelor’s degree in Computer Science, Engineering, or related field.
- 2+ years in professional software development
- Solid understanding of software development patterns and principles such as SOLID, the GoF design patterns, or similar.
- Experience automating deployments for different kinds of applications.
- Strong understanding of Git version control, merge/rebase strategies, tagging.
- Familiarity with containerization (Docker) and deployment orchestration (e.g., docker compose).
- Solid scripting experience (bash, or similar).
- Understanding of observability, monitoring, and probing tooling (e.g., Prometheus, Grafana, blackbox exporter).
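The probing tooling named in the last requirement boils down to a simple idea: periodically hit a target from the outside and record success and latency. A real setup would use Prometheus blackbox_exporter; this pure-Python sketch only illustrates the pattern, using a throwaway local server so it is self-contained:

```python
# Blackbox-style probe sketch: issue an HTTP GET against a target and
# record success and latency. The local server exists only to make the
# example runnable end to end.
import http.server
import threading
import time
import urllib.request

def probe(url, timeout=2.0):
    """Return (success, latency_seconds) for a single HTTP GET probe."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            ok = 200 <= resp.status < 300
    except OSError:  # URLError/HTTPError/timeouts all derive from OSError
        ok = False
    return ok, time.monotonic() - start

# Throwaway local target.
server = http.server.HTTPServer(("127.0.0.1", 0),
                                http.server.SimpleHTTPRequestHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_address[1]}/"
up, latency = probe(url)
server.shutdown()
```

In production the exporter would expose `up` and the latency as Prometheus metrics scraped by Grafana dashboards; here they are just Python values.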
Preferred Skills
- Experience in SRE
- Proficiency in CI/CD tooling (e.g., GitHub Actions, Jenkins, GitLab).
- Familiarity with build tools like Make, CMake, or similar.
- Exposure to artifact management systems (e.g., aptly, Artifactory, Nexus).
- Experience deploying to Linux production systems with service uptime guarantees.
Responsibilities
- Develop new services needed by the SRE, Field, or Development teams, adopting unit testing, agile, and clean-code practices.
- Drive the CI/CD pipeline and maintain the workflows using tools such as GitLab and Jenkins.
- Deploy the services and implement and refine the automation for different environments.
- Operate: Run the services that the SRE team develops.
- Automate release pipelines: Build and maintain CI/CD workflows using tools such as Jenkins and GitLab.
- Version control: Manage and enforce Git best practices, branching strategies (e.g., Git Flow), tagging, and release versioning.
- Collaboration: Work closely with developers, QA, and product teams to align on release timelines and feature readiness.
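The "tagging and release versioning" duty above often reduces to computing the next semantic-version tag. A hypothetical helper (independent of any real Git tooling) sketches the rule:

```python
# Semantic-version bump sketch for vMAJOR.MINOR.PATCH release tags.
# Bumping a part resets all lower-order parts to zero.
def bump(tag, part):
    """Bump a vMAJOR.MINOR.PATCH tag; part is 'major', 'minor' or 'patch'."""
    major, minor, patch = (int(x) for x in tag.lstrip("v").split("."))
    if part == "major":
        major, minor, patch = major + 1, 0, 0
    elif part == "minor":
        minor, patch = minor + 1, 0
    elif part == "patch":
        patch += 1
    else:
        raise ValueError(f"unknown part: {part}")
    return f"v{major}.{minor}.{patch}"

next_tag = bump("v1.4.2", "minor")  # -> "v1.5.0"
```

In a real pipeline the result would be applied with `git tag` and pushed as part of the release job.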
Success Metrics
- Achieve >99% service uptime with minimal rollbacks.
- Deliver on time and hold timelines.
Benefits
Enjoy a great environment, great people, and a great package
- Stock Appreciation Rights - Generous pre-Series-B stock options
- Generous Gratuity Plan - Long service compensation far exceeding Indian statutory requirements
- Health Insurance - Premium health insurance for employee, spouse and children
- Working Hours - Flexible working hours with sole focus on enabling a great work environment
- Work Environment - Work with top industry experts in an environment that fosters co-operation, learning and developing skills
- Make a Difference - We're here because we want to make an impact on the world - we hope you do too!
Why Join RtBrick
Enjoy the excitement of a start-up without the risk!
We're revolutionizing the Internet's backbone using cutting-edge software development techniques. The internet, and broadband networks in particular, are among the world's most critical technologies, relied on by billions of people every day. RtBrick is revolutionizing the way these networks are constructed, moving away from traditional monolithic routing systems to a more agile, disaggregated infrastructure and distributed edge network functions. This shift mirrors transformations seen in computing and cloud technologies, marking the most profound change in networking since the inception of IP technology.
We're pioneering a cloud-native approach, harnessing the power of container-based software, microservices, a devops philosophy, and warehouse scale tools to drive innovation.
And although RtBrick is a young, innovative company, it stands on solid financial ground: we are already cash-flow positive, backed by major telco investors like Swisscom Ventures and T-Capital, and our solutions are actively deployed by Tier-1 telcos including Deutsche Telekom (Europe's largest carrier), regional ISPs, and city ISPs, with expanding operations across Europe, North America, and Asia.
Joining RtBrick offers you the unique thrill of a startup environment, coupled with the security that comes from working in a business with substantial market presence and significant revenue streams.
We'd love you to come and join us, so why not embrace the opportunity to be part of a team that's not just participating in the market but actively shaping the future of telecommunications worldwide?
Mandatory (Experience 1): Must have 5+ years of hands-on designing experience especially in print and visual communication projects.
Mandatory (Experience 2): Must have proven conceptualisation and visualization skills, with the ability to convert briefs, raw data, and ideas into engaging visual creatives while maintaining brand messaging and design consistency.
Mandatory (Experience 3): Must have client interaction and team management experience, including coordinating with designers, art directors, project managers, and clients to ensure timely and high-quality creative deliverables.
Mandatory (Tech Skills 1): Must have strong knowledge of design principles, typography, color theory, layout design, and print production standards, with the ability to sign-off or review final deliverables.
Mandatory (Tech Skills 2): Must have hands-on experience in handling multiple creative tools and design workflows independently.
Mandatory (Portfolio): Strong portfolio of Print Media / Creative Portfolio showcasing branding, marketing collaterals, brochures, presentations, or other high-quality visual design projects that demonstrate conceptual thinking and software proficiency.
Mandatory (Work Environment): Must be willing to work full-time from office and be flexible for rotational monthly shifts (including night shifts) as per business requirements.
This role is responsible for architecting and implementing the Agentic capabilities of the PHI ecosystem. The engineer will lead the development of multi-agent systems, enabling seamless interoperability between AI agents, internal tools, and external services.
The position requires a strong focus on AI safety, secure agent orchestration, and tool-connected AI systems capable of executing complex workflows within the health insurance domain.
1. Agent Orchestration
- Build and manage autonomous AI agents using Agent Development Kit (ADK) and Vertex AI Agent Engine.
- Design and implement multi-agent workflows capable of handling complex tasks.
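The ADK and Vertex AI Agent Engine APIs are not reproduced here; as a framework-agnostic sketch, a multi-agent workflow is an orchestrator that routes each step to the agent registered for it and feeds one agent's output into the next. Agent names, steps, and fields below are hypothetical:

```python
# Framework-agnostic multi-agent workflow sketch: a triage agent classifies
# the request, and the orchestrator routes claim intents to a claims agent.

def triage_agent(text):
    """Classify the incoming message (toy heuristic for illustration)."""
    return {"intent": "claim" if "claim" in text else "general"}

def claims_agent(ctx):
    """Handle claim intents by routing them onward."""
    return {**ctx, "routed_to": "claims-queue"}

AGENTS = {"triage": triage_agent, "claims": claims_agent}

def run_workflow(message):
    """Chain agents: triage first, then route claim intents onward."""
    ctx = AGENTS["triage"](message)
    if ctx["intent"] == "claim":
        ctx = AGENTS["claims"](ctx)
    return ctx

result = run_workflow("I want to file a claim")
```

In the real system each agent would be an LLM-backed ADK agent rather than a plain function, but the orchestration shape is the same.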
2. Interoperability
- Implement the Model Context Protocol (MCP) to enable connectivity between:
- AI agents
- Internal PHI tools
- External services and APIs.
3. Multimodal Development
- Build real-time, bidirectional audio applications using the Gemini Live API.
- Integrate image generation models and support multimodal AI capabilities.
4. Safety Engineering
- Implement AI safety layers to protect sensitive healthcare data.
- Use Model Armor and Cloud DLP API to:
- Sanitize prompts
- Prevent exposure of PII/PHI data
- Enforce secure AI interactions.
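Model Armor and the Cloud DLP API are managed services; this pure-Python sketch only illustrates the underlying idea of prompt sanitization, i.e. redacting likely PII/PHI patterns before text reaches a model. The regexes are illustrative, not production-grade:

```python
# Prompt-sanitization sketch: replace recognized PII patterns with typed
# placeholders. Real systems would use Cloud DLP infoType detectors instead.
import re

PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[- ]\d{3}[- ]\d{4}\b"),
}

def sanitize(prompt):
    """Replace each matched PII pattern with a typed placeholder."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

clean = sanitize(
    "Patient jane@example.com, SSN 123-45-6789, called from 555-867-5309"
)
```

The sanitized text retains the placeholder type, so downstream agents know which kind of data was removed.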
5. Agent-to-Agent (A2A) Communication
- Configure remote agent connectivity using the A2A SDK.
- Enable cross-agent collaboration and workflow orchestration.
Must-Have Skills
- Advanced proficiency with Agent Development Kit (ADK).
- Strong experience with Vertex AI Agent Engine.
- Hands-on experience with Model Context Protocol (MCP).
- Experience implementing Agent-to-Agent (A2A) workflows using the A2A SDK.
- Expertise in Google Gen AI SDK for Python.
- Experience building multimodal AI applications.
- Proven experience implementing AI safety layers, including:
- Model Armor
- Cloud DLP API
Good-to-Have Skills (Foundation)
Data & Analytics
- BigQuery optimization techniques, including:
- Partitioning
- Clustering
- Denormalization for performance and cost optimization.
Streaming & Real-Time Pipelines
- Experience building real-time data pipelines using:
- Google Pub/Sub
- BigQuery streaming pipelines
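Pub/Sub and BigQuery streaming are managed services, but the core pattern such pipelines implement is aggregating an unbounded event stream into windows. A minimal pure-Python sketch of tumbling-window counting (event fields are hypothetical):

```python
# Tumbling-window aggregation sketch: bucket events by window start time
# and event type, the basic shape of a streaming analytics pipeline.
from collections import defaultdict

def tumbling_window_counts(events, window_seconds=60):
    """Count events per (window_start, event_type) in tumbling windows."""
    counts = defaultdict(int)
    for ev in events:
        window_start = ev["ts"] - (ev["ts"] % window_seconds)
        counts[(window_start, ev["type"])] += 1
    return dict(counts)

stream = [
    {"ts": 10, "type": "click"},
    {"ts": 59, "type": "click"},
    {"ts": 61, "type": "view"},
]
agg = tumbling_window_counts(stream)
```

In a real pipeline the events would arrive from a Pub/Sub subscription and the aggregates would be streamed into a partitioned BigQuery table; the windowing logic is the transferable part.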
- Generate revenue by pitching prospects and converting them to sales.
- Meet and exceed pipeline contribution goals
- Respond quickly to assigned leads
- Willingness to be measured by weekly and monthly metrics.
- Use and become an expert on the Customer Relationship Management (CRM) system.
- Fully responsible for meeting periodically assigned targets
- Comfortable with cold calling
- Experience in the event industry and artist-management space is a must
- Brilliant negotiation skills
- Good communication skills; a smart worker
- Flexible; able to shift priorities to accommodate changing demands
- The ability to break the ice and engage in extensive networking and socializing across various platforms
- Experience working with a CRM (e.g., Pipedrive, Salesforce, Zoho) would be a plus
At Cypherock, we are disrupting the current financial system by increasing the adoption of blockchain-based digital assets through better key management solutions. We build world-first products from India, work at the intersection of blockchain, security, embedded hardware, and cryptography, and have worked with companies like Google, Blockgeeks, Samsung, Lockheed Martin, Apollo Munich, and Bank of America, amongst others.
You will be the primary person responsible for everything blockchain in the company, and we think it will be a great fit if:
- You love everything Crypto and are passionate to create the World's safest Crypto wallet.
- You have MERN stack & DevOps experience on AWS.
- You have reasonable open-source development experience and have shipped production-ready code.
- You can commit for at least 6 months; ideally, you are in your 4th year of college and willing to join full-time after the internship if mutually agreed.
If we decide to work together, we believe you would be a key team member who helps in the mass adoption of Crypto for the first billion users.
- Owns the end-to-end implementation of the assigned data processing components/product features, i.e. design, development, deployment, and testing of the data processing components and associated flows, conforming to best coding practices
- Creation and optimization of data engineering pipelines for analytics projects
- Support data and cloud transformation initiatives
- Contribute to our cloud strategy based on prior experience
- Independently work with all stakeholders across the organization to deliver enhanced functionalities
- Create and maintain automated ETL processes with a special focus on data flow, error recovery, and exception handling and reporting
- Gather and understand data requirements, and work within the team to achieve high-quality data ingestion and build systems that can process and transform the data
- Be able to comprehend the application of database indexes and transactions
- Be involved in the design and development of a Big Data predictive-analytics SaaS-based customer data platform, using object-oriented analysis, design, and programming skills, and design patterns
- Implement ETL workflows for data matching, data cleansing, data integration, and management
- Maintain existing data pipelines and develop new data pipelines using big data technologies
- Lead the effort of continuously improving the reliability, scalability, and stability of microservices and the platform
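"Error recovery, and exception handling and reporting" in an ETL process usually means that a bad row must not kill the batch. A minimal sketch of that pattern, with failed rows diverted to a dead-letter list for later reprocessing (row shapes are hypothetical):

```python
# Row-level error handling sketch for an ETL batch: transform what you can,
# quarantine what fails, and report both counts at the end.

def parse_amount(row):
    """Toy transform: coerce the amount field to float."""
    return {"id": row["id"], "amount": float(row["amount"])}

def run_batch(rows):
    """Transform each row; collect failures instead of aborting the batch."""
    loaded, dead_letter = [], []
    for row in rows:
        try:
            loaded.append(parse_amount(row))
        except (KeyError, ValueError) as exc:
            dead_letter.append({"row": row, "error": str(exc)})
    report = {"loaded": len(loaded), "failed": len(dead_letter)}
    return loaded, dead_letter, report

rows = [{"id": 1, "amount": "9.50"}, {"id": 2, "amount": "oops"}, {"id": 3}]
loaded, dead_letter, report = run_batch(rows)
```

The dead-letter list preserves both the offending row and the error, which is what makes recovery and reporting possible later.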