
Foxy:
Our first product, FOXY, is an artist-led commerce platform for beauty, grooming and personal care. Foxy is committed to bringing the best, latest and 100% genuine beauty products from across the world, as recommended by top influencers. We connect consumers with beauty influencers, artists and top brands, ensuring each user's shopping experience is unique. We understand that everyone's beauty and personal care needs are different, so every user gets a tailor-made, personalised shopping experience on our platform.
Sustainability is at the core of FOXY and we are working towards creating a future with less pollution.
Category Manager
As a Category Manager, the individual will own the respective category's P&L, growth strategy and brand partnerships. This is a critical role for the organization.
The responsibilities will include but are not limited to:
- Identifying the right product & brand mix for growth, bringing brands on board and building long-term synergistic relationships
- Creating a growth calendar and aligning brands, offers, marketing, curation & content to drive sales
- The person would be the business owner for his/her categories, leveraging know-how of internal and external variables to deliver sales and outreach. The role entails forecasting, monitoring, understanding and reporting on the category, as well as driving strategic projects and promotions to achieve business objectives.
- Building deeper connections with brands and driving adoption of our brand platform solutions
- We are a data-driven, tech-first and fast-growing organization, so the individual needs to be performance-driven, output-oriented and ready for a start-up environment
- Category management experience with a leading retail, e-commerce or beauty-industry company.

About EkAnek Networks
We are building a team of tenacious and resilient people who are passionate about tech, beauty and eCommerce. Employees are encouraged to take direct ownership of opportunities and are closely mentored by senior leadership. The team is lean, works hard and solves tough problems. We select one person to join the team for every thirty who apply. Following are the tenets of our culture.
Performance. We have a clear measurement and support structure to deliver and measure impact. This gives clarity and ensures strong delivery is rewarded disproportionately.
Learning. We fit the role to each person. Roles are clearly defined and stretch people in growth areas identified in development conversations.
Ownership. Every person in the team has space and the mandate to go further in their objectives and pick up other areas of interest.
Support. Our leadership is extremely accessible and open to mentorship. We are supportive of special circumstances as well as educational opportunities.
📍 Position : Java Architect
📅 Experience : 10 to 15 Years
🧑💼 Open Positions : 3+
📍 Work Location : Bangalore, Pune, Chennai
💼 Work Mode : Hybrid
📅 Notice Period : Immediate joiners preferred; up to 1 month maximum
🔧 Core Responsibilities :
- Lead architecture design and development for scalable enterprise-level applications.
- Own and manage all aspects of technical development and delivery.
- Define and enforce best coding practices, architectural guidelines, and development standards.
- Plan and estimate the end-to-end technical scope of projects.
- Conduct code reviews, ensure CI/CD, and implement TDD/BDD methodologies.
- Mentor and lead individual contributors and small development teams.
- Collaborate with cross-functional teams, including DevOps, Product, and QA.
- Engage in high-level and low-level design (HLD/LLD), solutioning, and cloud-native transformations.
🛠️ Required Technical Skills :
- Strong hands-on expertise in Java, Spring Boot, Microservices architecture
- Experience with Kafka or similar messaging/event streaming platforms
- Proficiency in cloud platforms – AWS and Azure (must-have)
- Exposure to frontend technologies (nice-to-have)
- Solid understanding of HLD, system architecture, and design patterns
- Good grasp of DevOps concepts, Docker, Kubernetes, and Infrastructure as Code (IaC)
- Agile/Lean development, Pair Programming, and Continuous Integration practices
- Polyglot mindset is a plus (Scala, Golang, Python, etc.)
🚀 Ideal Candidate Profile :
- Currently working in a product-based environment
- Already functioning as an Architect or Principal Engineer
- Proven track record as an Individual Contributor (IC)
- Strong engineering fundamentals with a passion for scalable software systems
- No compromise on code quality, craftsmanship, and best practices
🧪 Interview Process :
- Round 1: Technical pairing round
- Rounds 2 & 3: Technical rounds with panel (code pairing + architecture)
- Final Round: HR and offer discussion
About MyOperator
MyOperator is a Business AI Operator, a category leader that unifies WhatsApp, Calls, and AI-powered chat & voice bots into one intelligent business communication platform. Unlike fragmented communication tools, MyOperator combines automation, intelligence, and workflow integration to help businesses run WhatsApp campaigns, manage calls, deploy AI chatbots, and track performance — all from a single, no-code platform. Trusted by 12,000+ brands including Amazon, Domino's, Apollo, and Razorpay, MyOperator enables faster responses, higher resolution rates, and scalable customer engagement — without fragmented tools or increased headcount.
About the Role
We are seeking a Site Reliability Engineer (SRE) with a minimum of 2 years of experience who is passionate about monitoring, observability, and ensuring system reliability. The ideal candidate will have strong expertise in Grafana, Prometheus, OpenSearch, and AWS CloudWatch, with the ability to design insightful dashboards and proactively optimize system performance.
Key Responsibilities
- Design, develop, and maintain monitoring and alerting systems using Grafana, Prometheus, and AWS CloudWatch.
- Create and optimize dashboards to provide actionable insights into system and application performance.
- Collaborate with development and operations teams to ensure high availability and reliability of services.
- Proactively identify performance bottlenecks and drive improvements.
- Continuously explore and adopt new monitoring/observability tools and best practices.
Required Skills & Qualifications
- Minimum 2 years of experience in SRE, DevOps, or related roles.
- Hands-on expertise in Grafana, Prometheus, and AWS CloudWatch.
- Proven experience in dashboard creation, visualization, and alerting setup.
- Strong understanding of system monitoring, logging, and metrics collection.
- Excellent problem-solving and troubleshooting skills.
- Quick learner with a proactive attitude and adaptability to new technologies.
Good to Have (Optional)
- Experience with AWS services beyond CloudWatch.
- Familiarity with containerization (Docker, Kubernetes) and CI/CD pipelines.
- Scripting knowledge (Python, Bash, or similar).
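To give a flavour of the alerting work this role involves, here is a minimal, hypothetical sketch (standard-library Python only) of the logic a Prometheus alerting rule's `for:` clause encodes: an alert should fire only when a metric stays above its threshold for a full evaluation window, not on a single spike. The class name and values are illustrative, not from any real system.

```python
# Hypothetical sketch: fire an alert only when every sample in the last
# `window` observations exceeds `threshold`, mirroring the sustained-breach
# semantics of a Prometheus alerting rule's `for:` clause.

from collections import deque

class ThresholdAlert:
    """Fires when the last `window` samples all exceed `threshold`."""

    def __init__(self, threshold: float, window: int):
        self.threshold = threshold
        self.samples = deque(maxlen=window)  # keeps only the newest `window` samples

    def observe(self, value: float) -> bool:
        self.samples.append(value)
        # Fire only once the window is full and every sample breaches the threshold.
        return (len(self.samples) == self.samples.maxlen
                and all(v > self.threshold for v in self.samples))

alert = ThresholdAlert(threshold=0.9, window=3)
states = [alert.observe(v) for v in [0.95, 0.96, 0.85, 0.97, 0.98, 0.99]]
print(states)  # the dip to 0.85 resets the window; only the final sample fires
```

The same suppression of transient spikes is what distinguishes actionable alerts from noisy ones in a real Grafana/Prometheus setup.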
Why Join Us
At MyOperator, you will play a key role in ensuring the reliability, scalability, and performance of systems that power AI-driven business communication for leading global brands. You’ll work in a fast-paced, innovation-driven environment where your expertise will directly impact thousands of businesses worldwide.
Key Responsibilities
- Manage on-time and on-budget delivery with planned profitability, consistently looking to improve quality and profitability
- Lead the onsite project teams and ensure they understand the client environment
- Responsible for backlog growth: existing projects, renewals/extensions of current projects, and rate revisions
- Responsible for driving RFPs and proactive bids
- Build senior and strategic relationships through delivery excellence
- Understand the client environment, issues, and priorities
- Serve as the day-to-day point of contact for the clients
Job Title: Data Engineer
Job Summary: As a Data Engineer, you will be responsible for designing, building, and maintaining the infrastructure and tools necessary for data collection, storage, processing, and analysis. You will work closely with data scientists and analysts to ensure that data is available, accessible, and in a format that can be easily consumed for business insights.
Responsibilities:
- Design, build, and maintain data pipelines to collect, store, and process data from various sources.
- Create and manage data warehousing and data lake solutions.
- Develop and maintain data processing and data integration tools.
- Collaborate with data scientists and analysts to design and implement data models and algorithms for data analysis.
- Optimize and scale existing data infrastructure to ensure it meets the needs of the business.
- Ensure data quality and integrity across all data sources.
- Develop and implement best practices for data governance, security, and privacy.
- Monitor data pipeline performance and errors, and troubleshoot issues as needed.
- Stay up-to-date with emerging data technologies and best practices.
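The pipeline responsibilities above can be sketched in miniature. The following is an illustrative, standard-library-only extract-transform-load (ETL) example; the table name, columns, and data are made up, and a real pipeline would use tools like Matillion or Informatica rather than hand-rolled code.

```python
# A toy ETL pipeline using only the standard library:
# extract rows from CSV, transform/clean them, and load into SQLite.

import csv
import io
import sqlite3

RAW = "order_id,amount\n1, 10.50 \n2, 3.25 \n"  # illustrative source data

def extract(text):
    """Extract: parse raw CSV text into dict rows."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Transform: trim whitespace and cast types (a basic data-quality step)."""
    return [(int(r["order_id"]), float(r["amount"].strip())) for r in rows]

def load(rows, conn):
    """Load: write rows into a warehouse table and return a sanity-check total."""
    conn.execute("CREATE TABLE IF NOT EXISTS orders (order_id INTEGER, amount REAL)")
    conn.executemany("INSERT INTO orders VALUES (?, ?)", rows)
    return conn.execute("SELECT SUM(amount) FROM orders").fetchone()[0]

conn = sqlite3.connect(":memory:")
total = load(transform(extract(RAW)), conn)
print(total)  # 13.75
```

Real pipelines add the concerns listed above on top of this skeleton: scheduling, error monitoring, data-quality checks, and governance.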
Requirements:
Bachelor's degree in Computer Science, Information Systems, or a related field.
Experience with ETL tools such as Matillion, SSIS or Informatica
Experience with SQL and relational databases such as SQL Server, MySQL, PostgreSQL, or Oracle.
Experience in writing complex SQL queries
Strong programming skills in languages such as Python, Java, or Scala.
Experience with data modeling, data warehousing, and data integration.
Strong problem-solving skills and ability to work independently.
Excellent communication and collaboration skills.
Familiarity with big data technologies such as Hadoop, Spark, or Kafka.
Familiarity with data warehouse/data lake technologies such as Snowflake or Databricks
Familiarity with cloud computing platforms such as AWS, Azure, or GCP.
Familiarity with reporting tools
Teamwork / Growth Contribution
- Helping the team take interviews and identify the right candidates
- Adhering to timelines
- Timely status communication and upfront flagging of any risks
- Teach, train, and share knowledge with peers.
- Good Communication skills
- Proven abilities to take initiative and be innovative
- Analytical mind with a problem-solving aptitude
Good to have :
Master's degree in Computer Science, Information Systems, or a related field.
Experience with NoSQL databases such as MongoDB or Cassandra.
Familiarity with data visualization and business intelligence tools such as Tableau or Power BI.
Knowledge of machine learning and statistical modeling techniques.
If you are passionate about data and want to work with a dynamic team of data scientists and analysts, we encourage you to apply for this position.

· 3.5+ years of work experience with the React.JS framework
· Expertise in the following front-end optimization techniques:
· Lazy Loading
· Asynchronous Module Definition
· Image Compression and Minification
· Other front-end tooling using Grunt / Webpack and NPM
· Familiarity with NodeJS, Jasmine / Karma and other unit testing frameworks
· Foundational data structures – arrays, dictionaries, sets and lists
· Proficient in evaluating front-end performance and measuring accordingly
· Strong appetite to learn industry trends and new & emerging technologies


About Us :
Docsumo is Document AI software that helps enterprises capture data and analyze customer documents. We convert documents such as invoices, ID cards, and bank statements into actionable data. We work with clients such as PayU, Arbor and Hitachi, and are backed by Sequoia, Barclays, Techstars, and Better Capital.
As a Senior Machine Learning Engineer you will work directly with the CTO to develop end-to-end API products for the US market in the information extraction domain.
Responsibilities :
- You will be designing and building systems that help Docsumo process visual data, i.e. PDFs & images of documents.
- You'll work in our Machine Intelligence team, a close-knit group of scientists and engineers who incubate new capabilities from whiteboard sketches all the way to finished apps.
- You will get to learn the ins and outs of building core capabilities & API products that can scale globally.
- Should have hands-on experience applying advanced statistical learning techniques to different types of data.
- Should be able to design, build and work with RESTful Web Services in JSON and XML formats. (Flask preferred)
- Should follow Agile principles and processes including (but not limited to) standup meetings, sprints and retrospectives.
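As a toy sketch of the information-extraction domain described above: pull structured fields out of unstructured document text and return them as JSON, as an extraction API might. The document text and field patterns here are entirely made up for illustration; the actual product relies on ML models (OpenCV/TensorFlow/Keras), not regexes.

```python
# Toy information-extraction sketch: regex-based field capture from document
# text, serialised to JSON. Illustrative only; real extraction uses ML models.

import json
import re

DOC = "Invoice No: INV-1042\nDate: 2023-04-01\nTotal Due: $1,250.00"

# Hypothetical field patterns, one capture group per field.
PATTERNS = {
    "invoice_no": r"Invoice No:\s*(\S+)",
    "date": r"Date:\s*(\d{4}-\d{2}-\d{2})",
    "total": r"Total Due:\s*\$([\d,]+\.\d{2})",
}

def extract_fields(text: str) -> str:
    """Return extracted fields as a JSON string; missing fields become null."""
    fields = {}
    for name, pattern in PATTERNS.items():
        match = re.search(pattern, text)
        fields[name] = match.group(1) if match else None
    return json.dumps(fields)

print(extract_fields(DOC))
# {"invoice_no": "INV-1042", "date": "2023-04-01", "total": "1,250.00"}
```

The production systems this role builds replace the regex layer with statistical and deep-learning models, but the contract (document in, structured JSON out) is the same.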
Skills / Requirements :
- Minimum 3+ years experience working in machine learning, text processing, data science, information retrieval, deep learning, natural language processing, text mining, regression, classification, etc.
- Must have a full-time degree in Computer Science or similar (Statistics/Mathematics)
- Working with OpenCV, TensorFlow and Keras
- Working with Python: NumPy, Scikit-learn, Matplotlib, Pandas
- Familiarity with Version Control tools such as Git
- Theoretical and practical knowledge of SQL / NoSQL databases with hands-on experience in at least one database system.
- Must be self-motivated, flexible, collaborative, with an eagerness to learn


We are looking for an Android Developer & PHP Developer with 1 year of experience.
Experience: minimum 10 months, maximum 2 years. Required candidates: 2. If anyone in your network is looking for the same, please refer them to I Vision Infotech.
Experience Level: 3 to 5 Years
Job Location: Hyderabad
Responsibilities
· Excellent knowledge of Core Java and Spring
· Candidate should have a working knowledge of web services
· Should have worked in the distributed agile model and continuous integration
· Should have knowledge of designing and implementation of REST Web services
· Strong experience with REST API and web services
· Should be proficient with Java, J2EE and related technologies.
Essential Requirements
· Strong Core Java and Spring.
· Strong RESTful web service experience
· Strong SQL (preferably Oracle), jQuery, HTML/CSS, and Oracle RESTful and SOAP web services
· B. Tech/M. Tech from Tier-1 colleges like IIT, NIT, VIT, BIT
Primary Skills: Java, Spring & RESTful web services

