
Job description:
Responsibilities:
- Managing retail operations, including cash, store operations, inventory, and shelf-life management
- Recruiting, managing, motivating, and training the teams
- Driving store-level, corporate and bulk-order sales
- Managing store assets and equipment
- Executing and maintaining in-store visual merchandising
- Resolving customer concerns in a diplomatic manner
Candidate Qualifications & Skill Requirements:
- Graduation in any stream
- An MBA/Diploma in Retail Management is not mandatory but is advantageous
- 5-8 years' experience in luxury retail, food retail, or hospitality, of which at least 2 years in a team-manager role
- Excellent customer handling and communication
- Must be well organized and diligently structured in their approach to any engagement
- An uncompromising focus on execution with a no-nonsense attitude toward goals and deliverables
- Immediate joiners preferred.
Job Location: Palladium Mumbai.
Reporting Manager: Operations Manager
Timeframe: Immediate
About us
- Burgundy Brand Collective is one of India's fastest-growing specialty retail companies, with stores in 9 cities. The company partners with best-in-class international luxury brands to offer Indians a window to the finest food and lifestyle themes from across the world. Our brand portfolio includes Royce' Chocolate (a premium Japanese confectionery brand), Onitsuka Tiger (a leading Japanese fashion and lifestyle brand), Provenance Gifts (a marketplace for curated gourmet gifts), Papabubble (an artistic, youth-oriented global candy brand) and Ligne Roset (a luxury French contemporary furniture brand). The plan is to aggressively (yet astutely) scale out a portfolio of international brands pan-India.

Job Title: Tech Lead and SSE – Kafka, Python, and Azure Databricks (Healthcare Data Project)
Experience: 4 to 12 years
Role Overview:
We are looking for a highly skilled Tech Lead with expertise in Kafka, Python, and Azure Databricks (preferred) to drive our healthcare data engineering projects. The ideal candidate will have deep experience in real-time data streaming, cloud-based data platforms, and large-scale data processing. This role requires strong technical leadership, problem-solving abilities, and the ability to collaborate with cross-functional teams.
Key Responsibilities:
- Lead the design, development, and implementation of real-time data pipelines using Kafka, Python, and Azure Databricks.
- Architect scalable data streaming and processing solutions to support healthcare data workflows.
- Develop, optimize, and maintain ETL/ELT pipelines for structured and unstructured healthcare data.
- Ensure data integrity, security, and compliance with healthcare regulations (HIPAA, HITRUST, etc.).
- Collaborate with data engineers, analysts, and business stakeholders to understand requirements and translate them into technical solutions.
- Troubleshoot and optimize Kafka streaming applications, Python scripts, and Databricks workflows.
- Mentor junior engineers, conduct code reviews, and ensure best practices in data engineering.
- Stay updated with the latest cloud technologies, big data frameworks, and industry trends.
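As an illustration of the pipeline work described above, here is a minimal sketch of a validation step that might sit between a Kafka consumer and a Databricks sink. All names and field choices are hypothetical, not a prescribed design:

```python
import json
from datetime import datetime, timezone

# Illustrative schema: fields a healthcare event might be required to carry.
REQUIRED_FIELDS = {"patient_id", "event_type", "timestamp"}

def validate_record(raw: bytes) -> dict:
    """Parse and validate one event before it enters the pipeline.

    Raises ValueError on malformed input so the caller can route the
    message to a dead-letter topic instead of failing the whole stream.
    """
    record = json.loads(raw)
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    # Normalize the timestamp to UTC ISO-8601 for downstream consumers.
    ts = datetime.fromisoformat(record["timestamp"])
    if ts.tzinfo is None:
        ts = ts.replace(tzinfo=timezone.utc)
    record["timestamp"] = ts.astimezone(timezone.utc).isoformat()
    return record
```

In a real deployment this function would be called per message inside the consumer loop, with failures produced to a dead-letter topic for later inspection.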
Required Skills & Qualifications:
- 4+ years of experience in data engineering, with strong proficiency in Kafka and Python.
- Expertise in Kafka Streams, Kafka Connect, and Schema Registry for real-time data processing.
- Experience with Azure Databricks (or willingness to learn and adopt it quickly).
- Hands-on experience with cloud platforms (Azure preferred, AWS or GCP is a plus).
- Proficiency in SQL, NoSQL databases, and data modeling for big data processing.
- Knowledge of containerization (Docker, Kubernetes) and CI/CD pipelines for data applications.
- Experience working with healthcare data (EHR, claims, HL7, FHIR, etc.) is a plus.
- Strong analytical skills, problem-solving mindset, and ability to lead complex data projects.
- Excellent communication and stakeholder management skills.
Please Apply - https://zrec.in/7EYKe?source=CareerSite
About Us
Infra360 Solutions is a services company specializing in Cloud, DevSecOps, Security, and Observability solutions. We help technology companies adopt a DevOps culture by focusing on a long-term DevOps roadmap: we identify the technical and cultural issues in implementing DevOps practices and work with the respective teams to fix them and raise overall productivity. We also run training sessions for developers on the importance of DevOps.
Our services include DevOps, DevSecOps, FinOps, Cost Optimization, CI/CD, Observability, Cloud Security, Containerization, Cloud Migration, Site Reliability, Performance Optimization, SIEM and SecOps, Serverless Automation, Well-Architected Reviews, MLOps, and Governance, Risk & Compliance.
We assess a company's technology architecture, security, governance, compliance, and DevOps maturity, then help it optimize cloud cost, streamline its technology architecture, and set up processes that improve the availability and reliability of its websites and applications. We set up tools for monitoring, logging, and observability, and bring a DevOps culture to the organization to improve its efficiency and delivery.
Job Description
Job Title: Senior DevOps Engineer / SRE
Department: Technology
Location: Gurgaon
Work Mode: On-site
Working Hours: 10 AM - 7 PM
Terms: Permanent
Experience: 4-6 years
Education: B.Tech/MCA
Notice Period: Immediately
About Us
At Infra360.io, we are a next-generation cloud consulting and services company committed to delivering comprehensive, 360-degree solutions for cloud, infrastructure, DevOps, and security. We partner with clients to transform and optimize their technology landscape, ensuring resilience, scalability, cost efficiency and innovation.
Our core services include Cloud Strategy, Site Reliability Engineering (SRE), DevOps, Cloud Security Posture Management (CSPM), and related Managed Services. We specialize in driving operational excellence across multi-cloud environments, helping businesses achieve their goals with agility and reliability.
We thrive on ownership, collaboration, problem-solving, and excellence, fostering an environment where innovation and continuous learning are at the forefront. Join us as we expand and redefine what’s possible in cloud technology and infrastructure.
Role Summary
We are seeking a Senior DevOps Engineer (SRE) to manage and optimize large-scale, mission-critical production systems. The ideal candidate will have a strong problem-solving mindset, extensive experience in troubleshooting, and expertise in scaling, automating, and enhancing system reliability. This role requires hands-on proficiency in tools like Kubernetes, Terraform, CI/CD, and cloud platforms (AWS, GCP, Azure), along with scripting skills in Python or Go. The candidate will drive observability and monitoring initiatives using tools like Prometheus, Grafana, and APM solutions (Datadog, New Relic, OpenTelemetry).
Strong communication, incident management skills, and a collaborative approach are essential. Experience in team leadership and multi-client engagement is a plus.
Ideal Candidate Profile
- 4-6 years of experience in SRE and DevOps roles, with a proven track record of handling large-scale production environments
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field
- Strong hands-on experience managing large-scale production systems
- Strong production troubleshooting skills and composure in high-pressure situations
- Strong experience with databases (PostgreSQL, MongoDB, Elasticsearch, Kafka)
- Has made production systems more scalable, highly available, and fault-tolerant
- Hands-on experience with ELK or other logging and observability tools
- Hands-on experience with Prometheus, Grafana and Alertmanager, and on-call tooling such as PagerDuty
- A problem-solving mindset
- Strong skills in Kubernetes, Terraform, Helm, ArgoCD, and AWS/GCP/Azure
- Good Python/Go scripting and automation skills
- Strong fundamentals in DNS, networking, and Linux
- Experience with APM tools such as New Relic, Datadog, and OpenTelemetry
- Good experience with incident response, incident management, and writing detailed RCAs
- Experience applying application best practices to make apps more reliable and fault-tolerant
- Strong leadership skills and the ability to mentor team members and provide guidance on best practices
- Able to manage multiple clients and take ownership of client issues
- Experience with Git and coding best practices
Good to have
- Team-leading experience
- Experience handling multiple clients
- Experience gathering requirements from clients
- Good communication skills
Key Responsibilities
- Design and Development:
- Architect, design, and develop high-quality, scalable, and secure cloud-based software solutions.
- Collaborate with product and engineering teams to translate business requirements into technical specifications.
- Write clean, maintainable, and efficient code, following best practices and coding standards.
- Cloud Infrastructure:
- Develop and optimise cloud-native applications, leveraging cloud services like AWS, Azure, or Google Cloud Platform (GCP).
- Implement and manage CI/CD pipelines for automated deployment and testing.
- Ensure the security, reliability, and performance of cloud infrastructure.
- Technical Leadership:
- Mentor and guide junior engineers, providing technical leadership and fostering a collaborative team environment.
- Participate in code reviews, ensuring adherence to best practices and high-quality code delivery.
- Lead technical discussions and contribute to architectural decisions.
- Problem Solving and Troubleshooting:
- Identify, diagnose, and resolve complex software and infrastructure issues.
- Perform root cause analysis for production incidents and implement preventative measures.
- Continuous Improvement:
- Stay up-to-date with the latest industry trends, tools, and technologies in cloud computing and software engineering.
- Contribute to the continuous improvement of development processes, tools, and methodologies.
- Drive innovation by experimenting with new technologies and solutions to enhance the platform.
- Collaboration:
- Work closely with DevOps, QA, and other teams to ensure smooth integration and delivery of software releases.
- Communicate effectively with stakeholders, including technical and non-technical team members.
- Client Interaction & Management:
- Serve as a direct point of contact for multiple clients.
- Handle the unique technical needs and challenges of two or more clients concurrently.
- Balance direct client interaction with internal team coordination.
- Production Systems Management:
- Manage, monitor, and debug production environments, drawing on extensive prior experience.
- Troubleshoot complex issues and ensure production systems run smoothly with minimal downtime.
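Much of the reliability work described above comes down to small, well-tested primitives. As a hedged sketch (names and defaults are illustrative, not a prescribed implementation), here is the kind of jittered exponential-backoff retry helper an SRE might wrap around flaky downstream calls:

```python
import random
import time

def retry_with_backoff(fn, max_attempts=5, base_delay=0.5, max_delay=30.0, sleep=time.sleep):
    """Call fn(), retrying transient failures with jittered exponential backoff.

    The `sleep` function is injectable so tests can run without real delays.
    Re-raises the last exception once max_attempts is exhausted.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts:
                raise
            # Full jitter: sleep a random amount up to the capped exponential delay.
            delay = min(max_delay, base_delay * 2 ** (attempt - 1))
            sleep(random.uniform(0, delay))
```

Jitter spreads retries out in time, which avoids the "thundering herd" of synchronized retries that can keep a recovering dependency overloaded.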

Location: Remote / Hybrid (Silicon Valley)
Job Type: Full-Time
Experience: 4+ years
About Altimate AI
At Altimate AI, we’re revolutionizing enterprise data operations with agentic AI—intelligent AI teammates that seamlessly integrate into existing workflows, helping data teams build pipelines, automate documentation, optimize infrastructure, and accelerate delivery.
Backed by top-tier investors and led by Silicon Valley veterans, we’re on a mission to automate and streamline data workflows, allowing data professionals to focus on innovation rather than repetitive tasks.
Role Overview
We are looking for an SDET (Software Development Engineer in Test) with expertise in Python, automation, data, and AI to help ensure the reliability, performance, and scalability of our AI-powered data solutions. You will work closely with engineering and data science teams to develop test automation frameworks, validate complex AI-driven data pipelines, and integrate testing into CI/CD workflows.
Key Responsibilities
✅ Develop and maintain automation frameworks for testing AI-driven data applications
✅ Design, implement, and execute automated test strategies for data pipelines and infrastructure
✅ Validate AI-driven insights and data transformations to ensure accuracy and reliability
✅ Integrate automated tests into CI/CD pipelines for continuous testing and deployment
✅ Collaborate with engineering and data science teams to improve software quality
✅ Identify performance bottlenecks and optimize automated testing approaches
✅ Ensure data integrity and compliance with industry best practices
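To make the pipeline-validation responsibility above concrete, here is a minimal sketch of the kind of data-quality check an SDET might automate. The checks and field names are hypothetical, chosen only to illustrate the pattern of collecting violations rather than failing on the first one:

```python
def check_pipeline_output(rows):
    """Return human-readable violations found in transformed rows.

    Illustrative checks: unique ids, no null amounts, amounts non-negative.
    Returning all violations at once gives a fuller picture per test run.
    """
    violations = []
    seen = set()
    for i, row in enumerate(rows):
        if row.get("id") in seen:
            violations.append(f"row {i}: duplicate id {row['id']}")
        seen.add(row.get("id"))
        amount = row.get("amount")
        if amount is None:
            violations.append(f"row {i}: null amount")
        elif amount < 0:
            violations.append(f"row {i}: negative amount {amount}")
    return violations

# Under PyTest, a test would simply assert that no violations remain:
def test_pipeline_output_is_clean():
    rows = [{"id": 1, "amount": 10.0}, {"id": 2, "amount": 0.0}]
    assert check_pipeline_output(rows) == []
```

Wired into CI, a check like this runs on every pipeline change, turning data-quality expectations into regression tests.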
Required Skills & Experience
- Strong Python programming skills with experience in test automation (PyTest, Selenium, or similar frameworks)
- Hands-on experience with data testing – validating ETL pipelines, SQL queries, and AI-generated outputs
- Proficiency in modern data stacks (SQL, Snowflake, dbt, Spark, Kafka, etc.)
- Experience with CI/CD tools like Jenkins, GitHub Actions, or GitLab CI
- Familiarity with cloud platforms (AWS, GCP, or Azure) and containerization (Docker, Kubernetes)
- Excellent problem-solving and analytical skills
- Strong communication skills to work effectively with cross-functional teams
Nice-to-Have (Bonus Points)
⭐ Prior experience in a fast-paced startup environment
⭐ Knowledge of machine learning model validation and AI-driven testing approaches
⭐ Experience with performance testing and security testing for AI applications
Why Join Altimate AI?
- Cutting-Edge AI & Automation – Work with next-gen AI-driven data automation technologies
- High-Impact Role – Be part of an early-stage, fast-growing startup shaping the future of enterprise AI
- Competitive Salary + Equity – Own a meaningful stake in a company with massive potential
- Collaborative Culture – Work with top-tier engineers, AI researchers, and data experts
- Opportunity for Growth – Play a key role in scaling AI-powered data operations
Experience - 2+ Years
Requirements:
● Should have at least 2+ years of DevOps experience
● Should have experience with Kubernetes
● Should have experience with Terraform/Helm
● Should have experience in building scalable server-side systems
● Should have experience in cloud infrastructure and designing databases
● Experience with Node.js/TypeScript/AWS is a bonus
● Experience with WebRTC is a bonus

We are seeking a Full Stack Engineer to join the Engineering team based out of Gurgaon. We give users the opportunity to invest in gold, government bonds, cryptocurrencies and other investment products to grow their savings.
We are constantly looking to improve the investment experience and educate users about growth opportunities. In each release, we aim to make Pluang more useful for our users and add features to ensure state-of-the-art security and reliability. Our users trust us with their hard-earned money, and we take that very seriously; we consistently strive to deliver top quality.
You will be working with a team of highly motivated, dynamic engineers, reporting to the Engineering Lead.
Position Responsibilities
● Be honest, reliable & consistent
● Write efficient & clean code
● Have a strong sense of ownership
● Be a part of development & maintenance of Pluang web app, Operations dashboard and other 3rd party products we own
● Contribute to improving the quality of engineering process & engineering culture
Position Requirements
- Strong in data structure and algorithms
- Experience in Java, Express, API Design & DOM
- Understanding of component based design or other design patterns
- Experience with unit testing, integration testing & continuous integration
- Experience with RDBMS and NoSQL databases, preferably PostgreSQL and MongoDB
- A passion for investing is good to have
We Offer
- Attractive compensation package - competitive salary, flexible bonus scheme.
- We are always looking for ways to promote and inspire innovation.
- Individual career path - management and technical career growth, enhanced by learning and development program, regular performance assessment, teams of multi-national IT professionals.
- Healthy work environment - company-sponsored medical program, food, and beverage program, open communication.
- Friendly policies to support Work-life balance, team building, and celebrations.
Role - Content Marketer
What you'll do
Be an SEO champ! Tell a product story that the Google algorithm loves. Understand our audience, their requirements and tie them with search volumes and user queries.
● Write content pieces that not just drive traffic but convert them.
● Build marketing collaterals that help leads in their decision-making process.
● You'd be expected to work with design, front-end, sales, and product.
● Work on different forms of content - blog posts, recording videos for YouTube, hosting podcasts.
Skills you bring to the table:
● You have a strong hold on SEO.
● You preferably have 2-3 years of experience working with a B2B marketing team.
● You are a great communicator, both verbal and written.
● Just like everyone in the team, you have maniacal attention to detail.
● You are a self-starter, capable of working independently.


Job Description:
Responsibilities:
* Work on real-world computer vision problems
* Write robust industry-grade algorithms
* Leverage OpenCV, Python and deep learning frameworks to train models.
* Use Deep Learning technologies such as Keras, Tensorflow, PyTorch etc.
* Develop integrations with various in-house or external microservices.
* Must have experience in deployment practices (Kubernetes, Docker, containerization, etc.) and model compression practices
* Research latest technologies and develop proof of concepts (POCs).
* Build and train state-of-the-art deep learning models to solve Computer Vision related problems, including, but not limited to:
* Segmentation
* Object Detection
* Classification
* Objects Tracking
* Visual Style Transfer
* Generative Adversarial Networks
* Work alongside other researchers and engineers to develop and deploy solutions for challenging real-world problems in the area of Computer Vision
* Develop and plan Computer Vision research projects, defining the scope of work, including formal research objectives and outcomes
* Provide specialized technical / scientific research to support the organization on different projects for existing and new technologies
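For the object detection and tracking work listed above, a common building block is intersection-over-union (IoU), used to match predicted boxes against ground truth. A minimal sketch, assuming axis-aligned boxes in (x1, y1, x2, y2) form:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2).

    Returns a value in [0, 1]; 0.0 when the boxes do not overlap.
    """
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle (empty when the boxes are disjoint).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    if inter == 0.0:
        return 0.0
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    # Union = sum of areas minus the double-counted intersection.
    return inter / (area_a + area_b - inter)
```

The same primitive underlies evaluation metrics like mAP and post-processing steps like non-maximum suppression.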
Skills:
* Object Detection
* Computer Science
* Image Processing
* Computer Vision
* Deep Learning
* Artificial Intelligence (AI)
* Pattern Recognition
* Machine Learning
* Data Science
* Generative Adversarial Networks (GANs)
* Flask
* SQL
- Proficient in Solidity programming language
- Experience with wallet integrations
- Experience with ethers.js/web3.js
- Experience with Nodejs
- Experience with upgradable contracts, dynamic contracts, and factory contracts
- Design and develop smart contracts and decentralized applications on Ethereum, Polygon and Tron
- Excellent knowledge of various token standards like ERC20, ERC721 and ERC1155
- Write end-to-end unit tests for all smart contracts
Who We Are: Square Yards is a technology-enabled O2O transaction and aggregator platform for global real estate. It offers a comprehensive, integrated menu of global property and asset portfolios. We have over 12,500 satisfied customers worldwide through our direct presence in 40 cities in 10 countries, including India, UAE, Qatar, Oman, Singapore, UK, Hong Kong, Malaysia, Australia and Canada. Our portfolio includes Square Capital, Square Connect, Bling Events and Square Marketing Technologies. Company website: https://www.squareyards.com
Qualification: Graduation, Postgraduate/MBA
Industry: Real Estate
Salary: 2.5 to 8
Experience: 0-5 years
Job description:
1. Build good working relationships with brokers and clients.
2. Understand the core values of the company and its goals.
3. Research the market and related products for possible business opportunities.
4. Present the product favorably, in a structured and professional way, face to face.
5. Maintain and develop relationships with channel partners in person and via telephone calls and emails.
6. Respond to incoming email and phone enquiries.
7. Act as a contact between the company and its existing and potential markets.
8. Negotiate the terms of an agreement and close deals.
9. Address objections with a view to getting the customer to buy.
10. Advise brokers on forthcoming product developments and discuss special promotions.

