
Job Role: Social Media Intern
Location: Remote
Internship Duration: 6 Months
Working Hours: Minimum ~5 hours/day
Internship Start Date: Immediate
Company Description
Discovr AI envisions a world where brands steer how AI agents perceive, surface, and recommend their products. We are building GEO, a platform that connects businesses with superintelligence to redefine visibility in the AI age. GEO combines deep analytics with instant AI-optimized content generation, offering advanced reasoning models and real-time signals to solve the toughest engineering and discovery challenges.
Role Description
This is a remote internship role for a Social Media Intern. The Social Media Intern will assist in creating and managing social media content, developing marketing strategies, and conducting digital marketing activities. The intern will be responsible for monitoring social media platforms, engaging with followers, and analyzing engagement metrics to inform future content strategies.
What you’ll do
- Spot daily/weekly trends and turn them into content ideas
- Manage content on any three of the listed platforms (LinkedIn, X, Pinterest, Instagram, YouTube/Shorts)
- Write catchy captions + hashtags; follow a simple content calendar
- Post consistently and engage (comments, DMs, community prompts)
- Track basics: reach, saves, CTR, follower growth; share weekly updates
- Collaborate with design/content for assets; repurpose content across platforms
Must-have skills
- Active user of at least 2–3 of the above-mentioned platforms with a clear understanding of trends
- Copywriting
- Community engagement etiquette
- Consistent availability of ~5 hours/day
- Strong reliability and ownership

About Discovr AI
Building the performance layer for AI-driven browsing and connecting brands with super-intelligence.
Similar jobs
Job Title: AWS DevOps Engineer – Manager, Business Solutions
Location: Gurgaon, India
Experience Required: 8-12 years
Industry: IT
We are looking for a seasoned AWS DevOps Engineer with robust experience in AWS middleware services and MongoDB Cloud Infrastructure Management. The role involves designing, deploying, and maintaining secure, scalable, and high-availability infrastructure, along with developing efficient CI/CD pipelines and automating operational processes.
Key Deliverables (Essential functions & Responsibilities of the Job):
· Design, deploy, and manage AWS infrastructure, with a focus on middleware services such as API Gateway, Lambda, SQS, SNS, ECS, and EKS.
· Administer and optimize MongoDB Atlas or equivalent cloud-based MongoDB solutions for high availability, security, and performance.
· Develop, manage, and enhance CI/CD pipelines using tools like AWS CodePipeline, Jenkins, GitHub Actions, GitLab CI/CD, or Bitbucket Pipelines.
· Automate infrastructure provisioning using Terraform, AWS CloudFormation, or AWS CDK (see the CDK sketch after this list).
· Implement monitoring and logging solutions using CloudWatch, Prometheus, Grafana, or the ELK Stack.
· Enforce cloud security best practices — IAM, VPC setups, encryption, certificate management, and compliance controls.
· Work closely with development teams to improve application reliability, scalability, and performance.
· Manage containerized environments using Docker, Kubernetes (EKS), or AWS ECS.
· Perform MongoDB administration tasks such as backups, performance tuning, indexing, and sharding.
· Participate in on-call rotations to ensure 24/7 infrastructure availability and quick incident resolution.
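To give a flavor of the IaC work above, here is a minimal sketch using AWS CDK v2 in Python to provision an SNS topic fanned out to an SQS queue; the stack and resource names are hypothetical, not taken from this posting.

```python
# Minimal AWS CDK v2 (Python) sketch: an SNS topic fanned out to an SQS queue.
# All stack/construct names here are illustrative placeholders.
from aws_cdk import App, Duration, Stack
from aws_cdk import aws_sns as sns
from aws_cdk import aws_sns_subscriptions as subs
from aws_cdk import aws_sqs as sqs
from constructs import Construct


class MiddlewareStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Queue that buffers work for downstream consumers (e.g., ECS tasks).
        queue = sqs.Queue(
            self, "OrdersQueue",
            visibility_timeout=Duration.seconds(300),
        )

        # Topic that producers publish to; the queue subscribes to it.
        topic = sns.Topic(self, "OrdersTopic")
        topic.add_subscription(subs.SqsSubscription(queue))


app = App()
MiddlewareStack(app, "MiddlewareStack")
app.synth()
```

Deploying is then a matter of running `cdk deploy` from the project directory.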
Knowledge Skills and Abilities:
· 7+ years of hands-on AWS DevOps experience, especially with middleware services.
· Strong expertise in MongoDB Atlas or other cloud MongoDB services (see the pymongo sketch after this list).
· Proficiency in Infrastructure as Code (IaC) tools like Terraform, CloudFormation, or AWS CDK.
· Solid experience with CI/CD tools: Jenkins, CodePipeline, GitHub Actions, GitLab, Bitbucket, etc.
· Excellent scripting skills in Python, Bash, or PowerShell.
· Experience in containerization and orchestration: Docker, EKS, ECS.
· Familiarity with monitoring tools like CloudWatch, ELK, Prometheus, Grafana.
· Strong understanding of AWS networking and security: IAM, VPC, KMS, Security Groups.
· Ability to solve complex problems and thrive in a fast-paced environment.
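As an illustration of the MongoDB administration work listed above, a minimal pymongo sketch (connection string, database, collection, and field names are hypothetical) that creates a compound index and checks the winning query plan:

```python
# Sketch: create a compound index and confirm the planner actually uses it.
# Connection string, database, collection, and fields are hypothetical.
from pymongo import ASCENDING, DESCENDING, MongoClient

client = MongoClient("mongodb+srv://user:pass@cluster0.example.mongodb.net")
orders = client["shop"]["orders"]

# Compound index supporting queries that filter by customer and sort by date.
orders.create_index(
    [("customer_id", ASCENDING), ("created_at", DESCENDING)],
    name="customer_created_idx",
)

# explain() shows whether the query plan is an IXSCAN rather than a COLLSCAN.
plan = orders.find({"customer_id": 42}).sort("created_at", -1).explain()
print(plan["queryPlanner"]["winningPlan"])
```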
Preferred Qualifications
· AWS Certified DevOps Engineer – Professional or AWS Solutions Architect – Associate/Professional.
· MongoDB Certified DBA or Developer.
· Experience with serverless services like AWS Lambda, Step Functions.
· Exposure to multi-cloud or hybrid cloud environments.
Mail updated resume with current salary to:
Email: etalenthire[at]gmail[dot]com
Satish: 88O 27 49 743


Senior Data Engineer
Location: Bangalore, Gurugram (Hybrid)
Experience: 4-8 Years
Type: Full Time | Permanent
Job Summary:
We are looking for a results-driven Senior Data Engineer to join our engineering team. The ideal candidate will have hands-on expertise in data pipeline development, cloud infrastructure, and BI support, with a strong command of modern data stacks. You’ll be responsible for building scalable ETL/ELT workflows, managing data lakes and marts, and enabling seamless data delivery to analytics and business intelligence teams.
This role requires deep technical know-how in PostgreSQL, Python scripting, Apache Airflow, AWS or other cloud environments, and a working knowledge of modern data and BI tools.
Key Responsibilities:
PostgreSQL & Data Modeling
· Design and optimize complex SQL queries, stored procedures, and indexes
· Perform performance tuning and query plan analysis (see the sketch after this list)
· Contribute to schema design and data normalization
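For the query-plan analysis mentioned in this list, a small psycopg2 sketch (the DSN, table, and column names are hypothetical):

```python
# Sketch: inspect a PostgreSQL query plan with EXPLAIN ANALYZE via psycopg2.
# The DSN, table, and columns are hypothetical placeholders.
import psycopg2

conn = psycopg2.connect("dbname=analytics user=etl host=localhost")
with conn, conn.cursor() as cur:
    # EXPLAIN ANALYZE runs the query and reports actual row counts and timings.
    cur.execute(
        """
        EXPLAIN ANALYZE
        SELECT o.customer_id, SUM(o.amount)
        FROM orders o
        WHERE o.created_at >= %s
        GROUP BY o.customer_id
        """,
        ("2024-01-01",),
    )
    for (line,) in cur.fetchall():
        print(line)
conn.close()
```

Reading the EXPLAIN ANALYZE output tells you whether the planner chose an index scan and where time is actually spent.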
Data Migration & Transformation
· Migrate data from multiple sources to cloud or ODS platforms
· Design schema mapping and implement transformation logic
· Ensure consistency, integrity, and accuracy in migrated data
Python Scripting for Data Engineering
· Build automation scripts for data ingestion, cleansing, and transformation
· Handle file formats (JSON, CSV, XML), REST APIs, and cloud SDKs such as Boto3 (see the sketch after this list)
· Maintain reusable script modules for operational pipelines
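A minimal sketch of such an ingestion script, using Boto3 to pull a CSV from S3, cleanse it, and write JSON back (the bucket, object keys, and column names are hypothetical):

```python
# Sketch: pull a CSV from S3, do a light cleanse, and write JSON back.
# Bucket, object keys, and column names are hypothetical.
import csv
import io
import json

import boto3

s3 = boto3.client("s3")


def ingest(bucket: str, key: str) -> None:
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")
    rows = []
    for row in csv.DictReader(io.StringIO(body)):
        # Cleansing step: trim whitespace and drop rows that lack an id.
        if not row.get("id", "").strip():
            continue
        rows.append({k.strip(): v.strip() for k, v in row.items()})
    s3.put_object(
        Bucket=bucket,
        Key=key.replace("raw/", "clean/").replace(".csv", ".json"),
        Body=json.dumps(rows).encode("utf-8"),
        ContentType="application/json",
    )


ingest("my-data-lake", "raw/customers.csv")
```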
Data Orchestration with Apache Airflow
· Develop and manage DAGs for batch/stream workflows
· Implement retries, task dependencies, notifications, and failure handling (see the DAG sketch after this list)
· Integrate Airflow with cloud services, data lakes, and data warehouses
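As a sketch of the retry and dependency handling above (the DAG id, schedule, task callables, and alert address are hypothetical; assumes Airflow 2.x):

```python
# Sketch: a small Airflow 2.x DAG with retries, a task dependency, and
# failure notifications. The DAG id, schedule, and callables are hypothetical.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pulling from source")


def load():
    print("loading to warehouse")


default_args = {
    "retries": 3,                         # re-run a failed task up to 3 times
    "retry_delay": timedelta(minutes=5),  # back off between attempts
    "email_on_failure": True,             # alert once retries are exhausted
    "email": ["data-alerts@example.com"],
}

with DAG(
    dag_id="nightly_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
    default_args=default_args,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_load = PythonOperator(task_id="load", python_callable=load)
    t_extract >> t_load  # load runs only after extract succeeds
```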
Cloud Platforms (AWS / Azure / GCP)
· Manage data storage (S3, GCS, Blob), compute services, and data pipelines
· Set up permissions, IAM roles, encryption, and logging for security
· Monitor and optimize cost and performance of cloud-based data operations
Data Marts & Analytics Layer
· Design and manage data marts using dimensional models
· Build star/snowflake schemas to support BI and self-serve analytics
· Enable incremental load strategies and partitioning (see the sketch below)
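One common shape for the incremental loads mentioned above is a watermark plus an upsert; a minimal psycopg2 sketch (all table and column names are hypothetical, and a unique constraint on sale_id is assumed):

```python
# Sketch: watermark-based incremental load with an idempotent upsert.
# All table and column names are hypothetical; assumes a unique
# constraint on fact_sales.sale_id.
import psycopg2

conn = psycopg2.connect("dbname=warehouse user=etl host=localhost")
with conn, conn.cursor() as cur:
    # Only pull rows newer than the last successful load (the "watermark").
    cur.execute("SELECT COALESCE(MAX(loaded_at), '1970-01-01') FROM etl_watermark")
    (watermark,) = cur.fetchone()

    # The upsert keeps the load idempotent if a batch is ever replayed.
    cur.execute(
        """
        INSERT INTO fact_sales (sale_id, customer_id, amount, created_at)
        SELECT sale_id, customer_id, amount, created_at
        FROM staging_sales
        WHERE created_at > %s
        ON CONFLICT (sale_id) DO UPDATE SET amount = EXCLUDED.amount
        """,
        (watermark,),
    )
    cur.execute("INSERT INTO etl_watermark (loaded_at) VALUES (NOW())")
conn.close()
```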
Modern Data Stack Integration
· Work with tools like DBT, Fivetran, Redshift, Snowflake, BigQuery, or Kafka
· Support modular pipeline design and metadata-driven frameworks
· Ensure high availability and scalability of the stack
BI & Reporting Tools (Power BI / Superset / Supertech)
· Collaborate with BI teams to design datasets and optimize queries
· Support development of dashboards and reporting layers
· Manage access, data refreshes, and performance for BI tools
Required Skills & Qualifications:
· 4–6 years of hands-on experience in data engineering roles
· Strong SQL skills in PostgreSQL (tuning, complex joins, procedures)
· Advanced Python scripting skills for automation and ETL
· Proven experience with Apache Airflow (custom DAGs, error handling)
· Solid understanding of cloud architecture (especially AWS)
· Experience with data marts and dimensional data modeling
· Exposure to modern data stack tools (DBT, Kafka, Snowflake, etc.)
· Familiarity with BI tools like Power BI, Apache Superset, or Supertech BI
· Version control (Git) and CI/CD pipeline knowledge is a plus
· Excellent problem-solving and communication skills

The company is building the platform to drive global careers for millennials from emerging economies.
We work at the exciting intersection of the two hottest trends around: edtech and fintech!
And we love that we succeed as a business while powering the dreams of talented students!
Summary:
The company is building its core engineering team and is looking for an Android developer who can take ownership and deliver independently.
The best candidates will check all or many of these boxes:
● 2+ years of experience in an engineering role
● Essential skills: Java and the Android framework
● Good to have: experience with Android Architecture Components (MVVM), Kotlin, and Dagger
● Experience with an early-stage start-up and an application published in the Play Store is a plus
Why is this a great opportunity for the right candidate:
● Experienced founding team
● Right to win - The founding team knows the business and its secrets inside out. We are starting with a significant head start and a precise plan of action
● Barriers to entry - This is a specialized play with natural barriers to entry, allowing for significant value creation for all equity holders
● Backed by marquee global investors
● Exposure to all aspects of company building - exposure to investors, fund-raising, decision-making, building the team and culture
● All of the above perks of joining a high-potential company very early, along with a competitive market salary.

Knowledge of Hadoop ecosystem installation, initial configuration, and performance tuning.
Expertise in Apache Ambari, Spark, Unix shell scripting, Kubernetes, and Docker (see the Ambari sketch after this list).
Knowledge of Python is desirable.
Experience with HDP Manager/clients and various dashboards.
Understanding of Hadoop security (Kerberos, Ranger, and Knox), encryption, and data masking.
Experience with automation/configuration management using Chef, Ansible, or an equivalent.
Strong experience with any Linux distribution.
Basic understanding of network technologies, CPU, memory, and storage.
Database administration is a plus.
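To give a flavor of the Ambari work, a small Python sketch polling the Ambari REST API for service state (the host, port, cluster name, and credentials are hypothetical):

```python
# Sketch: poll the Ambari REST API for HDFS service state.
# Host, port, cluster name, and credentials are hypothetical.
import requests

AMBARI = "http://ambari.example.com:8080/api/v1"
AUTH = ("admin", "admin")
HEADERS = {"X-Requested-By": "ambari"}  # header Ambari expects on API calls

resp = requests.get(
    f"{AMBARI}/clusters/prod/services/HDFS",
    auth=AUTH,
    headers=HEADERS,
    timeout=10,
)
resp.raise_for_status()
state = resp.json()["ServiceInfo"]["state"]  # e.g. "STARTED" or "INSTALLED"
print(f"HDFS service state: {state}")
```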
Qualifications and Education Requirements
2 to 4 years of experience with, and detailed knowledge of, core Hadoop components, solutions, and dashboards running on Big Data technologies such as Hadoop/Spark.
Bachelor's degree or equivalent in Computer Science, Information Technology, or related fields.



• Coordinate cross-functionally to ensure projects meet business objectives and compliance standards.
• Support test and deployment of new products and features.
• Design and implement continuous integration and deployment.
• Ensure application responsiveness.
• See projects through from conception to finished product.
• Meet technical and consumer needs.
Requirements:
• At least 8 years of experience with full-stack JavaScript technologies, primarily React.js.
• Work with a cross-functional software development team on highly visible strategic projects as an expert-level individual contributor on assigned coding tasks.
• Extensive HTML5, CSS, and JavaScript experience, with at least 3 end-to-end projects.
• Serverless software development experience with Node.js.
• Front-end frameworks: React, React Native / Angular / Vue.
• Ability to handle projects and teams single-handedly.
• Strong technical and system analysis skills.
• Hands-on experience with React Native, PHP Laravel, and Node.js.
• Hands-on experience in building cross-platform mobile apps using React Native
• Experience with either the iOS or Android platform is a must. Knowledge of both platforms is preferable.
• Solid understanding of Mobile application development life cycle
• Proficient in project management tools like Jira, ZOHO, etc.
• Hands-on experience with Agile development practices and XP or Scrum project methodologies.
• Track record of building efficient, well-designed mobile/web applications.
• Ability to learn and apply new technologies quickly and in a self-directed manner.
• Thorough understanding of Object-Oriented principles (Analysis and Design).
• Full lifecycle development experience on large projects.
• Bachelor's degree in Computer Science (or a related field)
Skill Sets and Prerequisites:
• 8+ years of relevant work experience
• Experience in building products with React.js, Node.js, and MongoDB.
• Must have team management experience.
• Must have product management experience.
• Strong organizational and product management skills.
• Good problem-solving skills.

- Manage sales operations in the assigned district to achieve revenue goals.
- Supervise sales team members (the BSMs) on a daily basis and provide guidance whenever needed.
- Identify skill gaps and conduct training for the sales team.
- Work with team to implement new sales techniques to obtain profits.
- Assist in employee recruitment, promotion, retention and termination activities.
- Conduct employee performance evaluation and provide feedback for improvements.
- Contact potential customers and identify new business opportunities.
- Stay abreast of customer needs, market trends, and competitors.
- Maintain clear and complete sales reports for management review.
- Build strong relationships with customers for business growth.
- Analyze sales performances and recommend improvements.
- Ensure that sales team follows company policies and procedures at all times.
- Develop promotional programs to increase sales and revenue.
- Plan and coordinate sales activities for assigned projects.

The responsibilities include:
1. Hiring a best-in-class engineering team
2. Working with bank partners to integrate APIs
3. Building a smooth and fast user experience
Salary no bar. Equity will be offered to the right candidate.






