Key responsibilities:
- Strategize and execute influencer onboarding and influencer marketing
- Manage relationships with influencers, agencies, and power users
- Create a feedback loop for the product team to improve app utility & performance
- Collaborate with the marketing team to grow the app community
Mandatory experience:
- 2 to 4 years of experience in community building/sales teams at high-growth consumer apps in India
- Experience working with Indian/global social media influencers on platform onboarding, profile management, and influencer marketing campaigns
- Experience with live video apps is a plus
- Love for social media and short video apps; self-usage is a big plus
- Degree from a Tier 1 university/institute

Job Title: AI Architect
Location: Pune (Onsite)
Experience: 6+ Years
Employment Type: Full-Time
Working requirements:
Engagement Commitment: An initial 3-month engagement to confirm the collaboration is progressing well.
Deliverable: A Minimum Viable Product already exists; the work focuses on enhancing it.
Working Hours Preferable: California time (roughly 12.5 to 13.5 hours behind IST, depending on daylight saving), so that the US and India teams can huddle and work simultaneously.
About the Role:
We are seeking an experienced and visionary AI Architect to lead the design, development, and deployment of cutting-edge AI/ML solutions. The ideal candidate will have a strong foundation in artificial intelligence, machine learning, data engineering, and cloud technologies, with the ability to architect scalable and high-performing systems.
Key Responsibilities:
- Design and implement end-to-end AI/ML architectures for large-scale enterprise applications.
- Lead AI strategy, frameworks, and roadmaps in collaboration with product and engineering teams.
- Guide the development of machine learning models (e.g., NLP, Computer Vision, Predictive Analytics).
- Define best practices for model training, validation, deployment, monitoring, and versioning.
- Collaborate with data engineers to build and maintain robust data pipelines.
- Choose the right AI tools, libraries, and platforms based on project needs (e.g., TensorFlow, PyTorch, Hugging Face).
- Work with cloud platforms (AWS, Azure, GCP) to deploy and manage AI models and services.
- Ensure AI/ML solutions comply with data privacy, governance, and ethical standards.
- Mentor junior AI engineers and data scientists.
Required Skills & Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Data Science, AI/ML, or a related field.
- 6+ years of experience in AI/ML, with at least 2+ years in an architecture or lead role.
- Strong experience with AI/ML frameworks: TensorFlow, PyTorch, Scikit-learn, etc.
- Deep understanding of LLMs, transformers, GPT models, and fine-tuning techniques.
- Proficiency in Python and data processing libraries (Pandas, NumPy, etc.).
- Experience with cloud-based AI services (AWS SageMaker, Azure ML, Vertex AI, etc.).
- Knowledge of MLOps practices, CI/CD for models, and model monitoring.
- Familiarity with data lakehouse architecture, real-time inference, and APIs.
- Strong communication and leadership skills.
Preferred Qualifications:
- Experience with generative AI applications and prompt engineering.
- Knowledge of reinforcement learning or federated learning.
- Publications or contributions to open-source AI projects.
- AI certifications from cloud providers (AWS, Azure, GCP).
Why Join Us?
- Work on transformative AI projects across industries.
- Collaborate with a passionate and innovative team.
- Flexible work environment with remote/hybrid options.
- Continuous learning, upskilling, and growth opportunities.
Key Responsibilities
Performance Optimization & Query Tuning
• Tune query performance for PostgreSQL, EdgeDB, MongoDB Atlas, and Snowflake.
• Analyze execution plans, identify bottlenecks, and implement indexing & caching strategies.
• Work with engineering teams to optimize schema design for OLTP (Postgres/EdgeDB) and metadata storage (MongoDB).
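The tuning loop described in these bullets (inspect the execution plan, add an index targeting the bottleneck predicate, re-check the plan) can be sketched in miniature. The snippet below is purely illustrative: it uses SQLite's built-in EXPLAIN QUERY PLAN as a lightweight stand-in for PostgreSQL's EXPLAIN ANALYZE, and the `events` table and `idx_events_tenant` index are invented for the demo.

```python
import sqlite3

# Illustrative analyze-then-index loop. SQLite's EXPLAIN QUERY PLAN
# stands in for PostgreSQL's EXPLAIN ANALYZE; table/index names are invented.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (tenant_id INTEGER, created_at TEXT, payload TEXT)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?, ?)",
    [(i % 50, f"2024-01-{i % 28 + 1:02d}", "x") for i in range(5000)],
)

query = "SELECT COUNT(*) FROM events WHERE tenant_id = 7"

# 1. Inspect the plan: without an index this is a full table scan.
before = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()
print(before)  # plan detail mentions "SCAN events"

# 2. Add an index covering the bottleneck predicate.
conn.execute("CREATE INDEX idx_events_tenant ON events (tenant_id)")

# 3. Re-check: the planner now searches the index instead of scanning.
after = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()
print(after)  # plan detail mentions idx_events_tenant
```

The same loop applies to the other engines named above, with engine-specific tooling (pg_stat_statements for PostgreSQL, the query profiler in MongoDB Atlas, the query profile view in Snowflake).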
Database Architecture & Scalability
• Design multi-tenant scaling strategies for PostgreSQL and MongoDB Atlas (schema, sharding, partitioning).
• Implement high-availability, replication, and clustering configurations.
• Optimize Snowflake warehouse configurations for query speed and cost control.
Operational Excellence
• Plan and execute zero-downtime database upgrades and schema migrations.
• Set up proactive monitoring, alerting, and anomaly detection across all database systems.
• Manage capacity planning for storage and compute resources across Postgres, MongoDB Atlas, and Snowflake.
Storage & Cost Optimization
• Reduce storage costs via archiving, partitioning, compression, and lifecycle policies.
• Optimize Snowflake compute and storage usage with warehouse tuning and data pruning.
• Implement tiered storage strategies for cold vs. hot data.
Security, Compliance & Governance
• Enforce encryption, access controls, and audit logging across all databases.
• Ensure compliance with GDPR, SOC 2, and other relevant regulations.
Collaboration & Knowledge Sharing
• Partner with backend, platform, and data engineering teams to ensure efficient database usage.
• Provide training and documentation on query best practices and schema design.
Qualifications
Required:
• 7+ years DBA experience, with deep expertise in PostgreSQL and MongoDB Atlas.
• Strong understanding of multi-tenant architectures in production.
• Experience with Snowflake query optimization, warehouse tuning, and cost management.
• Proven success in executing zero-downtime upgrades and large-scale migrations.
• Strong skills in query optimization, indexing, partitioning, and sharding.
• Proficiency in scripting (Python, Bash, SQL) for automation.
• Hands-on experience with monitoring tools (pg_stat_statements, Atlas monitoring, Snowflake Resource Monitors, Prometheus, Grafana).
Nice to Have:
• Experience with EdgeDB or graph/relational hybrid databases.
• Familiarity with Kubernetes-based database deployments (StatefulSets, Operators).
• Background in distributed caching (Redis, Memcached).
Company Description
BeBetta is a gamified platform designed for gamers who crave excitement, engagement, and real-world rewards. By playing games and making live predictions, users earn BetCoins, which can be redeemed for tangible prizes. Our unique approach blends gaming, predictions, and rewards, driving an immersive experience that revolutionizes user engagement. We are a high-growth, data-driven, and gamified tech startup committed to innovation and impact.
The Opportunity:
BeBetta is building the future of fan engagement. To do this, we need a backend that can handle millions of concurrent users making real-time predictions during live events. This requires a shift in our technology towards systems built for massive scale and low latency.
That’s where you come in. We are looking for a Senior Backend Engineer to lead our transition to a Go-based microservices architecture. You will be the driving force behind our most critical systems—the prediction engine, the rewards ledger, the real-time data pipelines. While our roots are in Node.js, our future is in Go, and you will be instrumental in building that future.
What You'll Achieve:
- Architect our core backend in Golang: You will design and build the services that are the backbone of the BeBetta experience, ensuring they are blazingly fast and incredibly reliable.
- Solve hard concurrency problems: You'll tackle challenges unique to real-time gaming and betting, ensuring fairness and accuracy for thousands of simultaneous user actions.
- Drive technical strategy: You will own the roadmap for evolving our architecture, including the thoughtful migration of essential services from Node.js to Go.
- Elevate the engineering bar: Through mentorship, exemplary code, and architectural leadership, you will help make our entire team better.
- Ship with impact: You will see your work go live quickly, directly enhancing the experience for our growing user base.
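To give a flavor of the concurrency problems above: a classic case is a check-then-debit on a shared rewards balance, which must be atomic or simultaneous requests can overspend. The sketch below is a toy illustration in Python (the stack described here is Go, where a mutex or channel would play the same role), and every name in it is invented.

```python
import threading

# Toy sketch: without the lock, two threads could both pass the balance
# check before either subtracts, allowing the balance to go negative.
class Ledger:
    def __init__(self, balance):
        self.balance = balance
        self._lock = threading.Lock()

    def debit(self, amount):
        # The lock makes the check and the update one atomic step.
        with self._lock:
            if self.balance >= amount:
                self.balance -= amount
                return True
            return False

ledger = Ledger(balance=100)
results = []

def place_bet():
    results.append(ledger.debit(30))

threads = [threading.Thread(target=place_bet) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Five concurrent debits of 30 against a balance of 100:
# exactly three succeed, and the balance never goes negative.
print(sum(results), ledger.balance)  # prints: 3 10
```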
What You'll Bring:
- A track record of building and deploying high-performance backend systems in Golang.
- Senior-level experience (4+ years) in system design, microservices, and API development.
- Pragmatic experience with Node.js and an understanding of how to manage and migrate a monolithic or service-based system.
- Deep knowledge of database principles (PostgreSQL preferred) and high-performance data access patterns (using tools like Redis).
- Expertise in modern infrastructure: Docker, Kubernetes, and a major cloud provider (GCP/AWS).
- A strong belief that testing, observability, and clean architecture are not optional.
- An innate curiosity and a passion for solving complex problems, whether they're in code or on a whiteboard.
Why You'll Love Working Here:
This isn't just another backend role. This is a chance to put your fingerprint on the foundational technology of a fast-growing company in the exciting world of sports tech and gaming. You'll have the autonomy to make big decisions and the support of a team that's all-in on the mission.
• Bachelor’s or master’s degree in Computer Engineering, Computer Science, Computer Applications, Mathematics, Statistics, or a related technical field, or equivalent practical experience. At least 3 years of relevant experience in lieu of the above if from a different stream of education.
• Well-versed in DevOps principles & practices, with hands-on DevOps tool-chain integration experience: Release Orchestration & Automation, Source Code & Build Management, Code Quality & Security Management, Behavior Driven Development, Test Driven Development, Continuous Integration, Continuous Delivery, Continuous Deployment, and Operational Monitoring & Management; extra points if you can demonstrate your knowledge with working examples.
• Hands-on, demonstrable working experience with DevOps tools and platforms such as Slack, Jira, Git, Jenkins, code quality & security plugins, Maven, Artifactory, Terraform, Ansible/Chef/Puppet, Spinnaker, Tekton, StackStorm, Prometheus, Grafana, ELK, PagerDuty, VictorOps, etc.
• Well-versed in virtualization & containerization; must demonstrate experience with technologies such as Kubernetes, Istio, Docker, OpenShift, Anthos, Oracle VirtualBox, Vagrant, etc.
• Well-versed in AWS and/or Azure and/or Google Cloud; must demonstrate experience with at least FIVE (5) services offered under AWS and/or Azure and/or Google Cloud in any of these categories: Compute, Storage, Database, Networking & Content Delivery, Management & Governance, Analytics, Security, Identity & Compliance; or equivalent demonstrable cloud platform experience.
• Well-versed, with demonstrable working experience, in API Management, API Gateway, Service Mesh, Identity & Access Management, and Data Protection & Encryption tools & platforms.
• Hands-on programming experience in core Java and/or Python and/or JavaScript and/or Scala; fresh graduates and lateral movers into IT must be able to code in the languages they have studied.
• Well-versed in storage, networks, and storage networking basics, which will enable you to work in a cloud environment.
• Well-versed in network, data, and application security basics, which will enable you to work in a cloud as well as business applications / API services environment.
• Extra points if you are certified in AWS and/or Azure and/or Google Cloud.
KIME Careers is an ed-tech company that offers distance-learning programs and enrols students in them on behalf of universities. We bridge the distance between working professionals and college, aiming to help working professionals choose the right career path towards excellence and growth through the best university portfolio, and to equip them with exceptional skills and competence to thrive in ever-evolving, challenging markets.
PROFILE – BUSINESS DEVELOPMENT EXECUTIVE
ROLES & RESPONSIBILITIES:
• Identify opportunities for new business development through lead generation.
• Coordinate pre-sales and post-sales follow-up.
• Present our product to potential clients.
• Close sales and work with the client through the closing process.
• Build long-term, trusting relationships with clients.
• Achieve monthly targets.
• Create and maintain a database (Salesforce, Excel) of prospective client information.
• Inside sales, plus outdoor meetings if required.
SKILLS REQUIRED:
• Good communication and presentation skills.
• Enthusiastic and spontaneous.
• Passionate about sales.
QUALIFICATIONS REQUIRED-
Graduates/ Post Graduates.
Work Location: Hyderabad
Experience: 9 to 12 Years
Package: Up to 24 LPA
Notice Period: Immediate joiners
Constraints: Rotational shifts; male candidates preferred
Job description -
--Interact with clients, the team, and management.
--A strong minimum of 5 years of experience in Java development and implementation.
--Lead a team, review performance, and plan resources.
--Provide guidance and insight to upper management and procure buy-in
--Report progress, including any changes made to plans and production
--Contribute to product design and establishment of requirements
--Delegate technical responsibilities and monitor the progress of projects
--Deliver products consistently, on time, and on budget
--Oversee user testing and report results—adjust requirements as needed
--Work closely with project manager during all phases of the development lifecycle
--Review all work produced by the development team
--Ensure code produced meets company standards
CommerceIQ is a well-funded, fast-growing enterprise SaaS platform that helps brands grow and sell more on e-commerce channels through its machine learning technology. Are you excited about building a distributed crawling engine at global scale that crawls and parses thousands of websites, with 10+ million crawls daily? Would you enjoy building something as ambitious as a “Google/Facebook ad platform” for Amazon (and other e-commerce retailers)? Does building a CI/CD and containerisation framework that lets our products be released and deployed every week across dozens of geographies and data centers seamlessly excite you? Do you find building machine learning models that optimize billions of dollars in ad and promotion spend exhilarating? Do you find it super exciting to build a plug-and-play product UI platform where brand leaders will spend hours daily (almost as much as in an email inbox) optimizing their business? We could keep writing, but you get the idea.
In our journey of building and scaling CommerceIQ, the engineers and data scientists on our team tackle these and many more problems daily. If you are as excited as we are after reading this, we would love to talk to you! 30+ global brands including Kellogg, Unilever, Johnson & Johnson, MARS, Nestlé, and Logitech trust our product to manage their growth on Amazon. If you are excited about building a product that will write the script for how brands sell and grow on e-commerce channels, please reach out to us.
Are you ready to power intelligent commerce? @CommerceIQ, you will:
• Develop, test, and release features on time and with high quality that drive revenue and margin impact for top brands.
• Design your own features, keeping in mind the scale and high availability of the systems.
• Work with the team; you will be expected to perform code reviews, conduct design discussions, and mentor other developers.
• Own your features and work directly with product teams to drive customer impact.
• Participate in all phases of the software development cycle as part of a Scrum team.
Experience: 3+ years in designing and developing complex and scalable software modules.
Skillset:
• The ideal candidate will be an experienced Java developer with exceptional software system design, problem-solving, and object-oriented coding skills.
• Experience with distributed transaction-processing systems or asynchronous messaging technology is required.
• Good understanding of system performance trade-offs, load balancing, and engineering for high availability.
• Obsessed with building quality software and owning end-to-end responsibility for the developed features.
• Understanding of enterprise information systems, service-oriented architectures, and operational data stores is a plus.
• BS or MS in Computer Science/Engineering, Mathematics, Statistics, or a similar degree from a top-tier institution.
