
Good to have: Azure cloud, Spark SQL.
- Will be the point of contact for reporting/data management needs and will be responsible for driving complex data analysis and proposing improvement opportunities in existing tools and reports.

Position: Lead Database Engineer
Experience: 3+ Years
Mandatory Skills: Amazon Aurora PostgreSQL, MySQL, NoSQL, AWS RDS, DMS, Performance Tuning, Replication Strategies, Query Optimization
Job Description:
- Database Administration: Manage and administer Amazon Aurora PostgreSQL, MySQL, and NoSQL databases, ensuring high availability, performance, and security.
- AWS Expertise: Proficient in AWS services such as RDS, Aurora, and DMS, with a strong understanding of cloud database architectures.
- Optimization: Implement optimization strategies at the database, query, collection, and table levels for PostgreSQL/MySQL. Expertise in performance monitoring and tuning.
- Configuration Management: Configure and fine-tune RDS parameter groups to enhance database performance and security.
- Monitoring & Troubleshooting: Proactively monitor database performance, identify bottlenecks, and perform root cause analysis (RCA) for issues.
- Access Control: Manage PostgreSQL/MySQL user permissions and roles, ensuring strict data security and access control protocols.
- Replication Strategies: Hands-on experience with master-master and master-slave replication setups.
- Collaboration: Work closely with development teams to provide guidance on database design, query optimization, and performance tuning.
- Backup & Recovery: Establish and maintain robust backup and recovery procedures to ensure data integrity and availability.
- Database Upgrades: Perform major version upgrades for MySQL and PostgreSQL to leverage new features and improvements.
- Problem Solving: Troubleshoot and resolve complex database performance issues, implementing innovative and effective solutions.
Note: Expertise in database performance tuning, AWS database services, and advanced replication strategies is essential.
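The query-optimization work described above can be illustrated with a minimal sketch. This uses SQLite (Python's stdlib `sqlite3`) rather than Aurora PostgreSQL/MySQL, so the plan output differs from a production `EXPLAIN`, but the core idea carries over: adding an index on a filtered column changes the plan from a full table scan to an index search.

```python
import sqlite3

# In-memory database standing in for a production table (illustrative schema).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
conn.executemany("INSERT INTO orders (customer_id, total) VALUES (?, ?)",
                 [(i % 100, i * 1.5) for i in range(1000)])

def plan(sql):
    # EXPLAIN QUERY PLAN rows end with a human-readable "detail" column
    # that says whether the query scans the table or uses an index.
    return " ".join(row[3] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT total FROM orders WHERE customer_id = 42"
before = plan(query)   # full table scan: every row is examined
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
after = plan(query)    # index search: only matching rows are touched
print(before)
print(after)
```

In a real MySQL/PostgreSQL engagement the same workflow applies: capture the plan, add or adjust an index, and confirm the plan (and latency) improved, rather than adding indexes speculatively.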
Process Name: US LLS (Line Language Selection) - WORK FROM HOME
Company - Teleperformance
Roles and Responsibilities: You will work as a language interpreter, interpreting between the customer and the client. The interpreter handles telephone or video calls on demand and renders the meaning of conversations in the consecutive mode of interpretation between speakers of French, Nepali, Bengali, or Urdu and excellent English.
Criteria: Maximum age 40
Qualification: Graduates or undergraduates can apply
Work History: Both freshers and experienced candidates can apply.
Skills Required:
Excellent communication in English as well as in the regional language
Good working knowledge of computers
Flexibility with rotational shifts
Comfort with video conferences with clients
Must have Wi-Fi and a laptop, desktop, or smartphone
Perks and Benefits:
Permanent Work from Home
System provided by the company
5-day work week with rotational week off.
Pan-India hiring for work from home.
Salary Details:
Bengali: up to 5.5 LPA
Nepali: up to 5.5 LPA
French: 10 LPA
Urdu: 30K per month
Selection Criteria:
HR Round
Online assessment in both languages (Client: Amcat)
Minimum score of 3+ in both languages
The test can be taken on any device.
Ops Round on Microsoft Teams
Key Responsibilities
Identify and bid on relevant projects on platforms like Upwork, Freelancer, Fiverr, Guru, and PeoplePerHour.
Write compelling business proposals, cover letters, and client responses to secure projects.
Engage with potential clients, understand their requirements, and provide tailored solutions.
Negotiate terms, finalize contracts, and ensure smooth project onboarding.
Maintain strong follow-ups with leads and nurture relationships for long-term business growth.
Collaborate with the technical team to ensure the successful execution of projects.
Stay updated with market trends and competitors to strategize effectively.
Required Skills & Qualifications
✅ 2+ years of experience in bidding and proposal writing.
✅ Proven track record of successfully acquiring projects from Upwork, Freelancer, and similar platforms.
✅ Excellent written and verbal communication skills.
✅ Strong understanding of IT services, web development, and digital solutions.
✅ Ability to negotiate deals and handle client queries professionally.
✅ Strong analytical and problem-solving skills to create effective proposals.
Why Join Us?
🚀 Exciting Growth Opportunities – Work with international clients and high-value projects.
💰 Attractive Incentives – Performance-based bonuses and rewards.
🤝 Collaborative Team Culture – Work with experienced professionals in a dynamic environment.
🏡 Skill Enhancement – Continuous learning and development opportunities.
Junior PHP Developer (Full-Time)
Malad, Mumbai (Mindspace) | Work from Office
We’re hiring a Junior PHP Developer at Websites.co.in, a platform where small businesses create their website in 2 minutes.
Your role
- Develop and maintain backend logic using PHP (Laravel or Core PHP)
- Write clean, reusable, and efficient code
- Work with MySQL databases (queries, joins, optimization)
- Integrate REST APIs and troubleshoot backend issues
- Collaborate with frontend, QA, and product teams for feature implementation
- Participate in code reviews, testing, and deployment activities
- Debug production issues and provide quick fixes
What we expect
- Hands-on development experience with PHP (mandatory)
- Strong knowledge of MySQL, queries, and database structures
- Understanding of MVC architecture (Laravel preferred)
- Basic knowledge of HTML, CSS, JavaScript
- Familiarity with Git version control
- Problem-solving mindset and willingness to take ownership
- 0–3.5 years of experience (freshers with strong projects are welcome)
Good to have
- Experience working with APIs, JSON, cURL
- Understanding of server basics (Linux, Apache, hosting environments)
What you get
- Real product ownership, not agency project hopping
- Direct collaboration with CTO and senior devs
- Steep learning curve in a fast-moving SaaS environment
Senior Data Engineer
Pls apply here:
tinyurl [dot] com/ysk8w2eu
About Discovered Labs
At Discovered Labs we work with $10M - $50M ARR companies to help them get more leads, users and customers from Google, Bing and AI assistants such as ChatGPT, Claude and Perplexity.
We approach marketing the way engineers approach systems: data in, insights out, feedback loops everywhere. Every decision traces back to measurable outcomes. Every workflow is designed to eliminate manual bottlenecks and compound over time.
High-level overview of our approach:
- Data-driven automation: We treat marketing programs like products. We instrument everything, automate the repetitive, and focus human effort on high-leverage problems.
- First principles thinking: We don't copy what others do. We understand the underlying mechanics of how search and AI systems work, then build solutions from that foundation.
- Full-stack ownership: SEO and AEO rarely work as isolated tasks. We work across the entire funnel and multiple surface areas to ensure we own the outcome and clients win.
The Team
We're a deeply technical team building the SpaceX of the AEO & SEO space. You'll work alongside engineers who have built fraud engines powering Stripe, Plaid, and Coinbase; developed self-driving car systems at Aurora; and conducted AI research at Stanford. We don't have layers of management. You'll work directly with founders who can go deep on architecture, code, and product.
This Role
Own the data infrastructure behind automated reporting, AI visibility monitoring, competitive intelligence, and proactive alerting across a growing multi-tenant client base.
The hard problem is operational complexity, not petabyte-scale volume. Many clients, each with multiple data sources, different schemas, different API rate limits, different failure modes, different freshness requirements. When one breaks, it can't take down everyone else. Fault isolation, graceful degradation, and per-tenant reliability are built in from the start.
This is largely greenfield. You'll be building out monitoring, observability, data quality layers and pipeline orchestration.
You report to the CTO and work closely with product engineers who build the features that consume your data layer. You'll define interfaces and data contracts together. There's no platform team. You own your infrastructure, your CI, and your monitoring.
What You'll Do
- Multi-tenant data infrastructure. Ingestion, validation, and transformation across multiple data sources. Fault isolation, schema variation, and graceful upstream failure handling.
- Third-party API integration. Most of our data comes from external APIs with their own auth flows, rate limits, pagination quirks, and breaking changes. You'll build robust, resilient connectors that handle all of this gracefully across many client accounts.
- Data quality systems. Automated checks on distributions, volumes, null rates, and freshness. Statistical validation, not just schema validation. Bad data doesn't make it downstream.
- Data observability. Freshness monitoring, volume anomaly detection, schema drift detection, lineage tracking, blast radius analysis. You know the difference between "the code ran" and "the data is correct."
- Alerting design. Not just dashboards. Threshold tuning, noise reduction, avoiding alert fatigue. Mean time to detection is a core metric for this role.
- Freshness SLAs. Define them per source, build infrastructure to meet them, alert before they breach.
- Event-driven trigger infrastructure. Surface performance changes, quality regressions, and freshness violations as events for downstream systems.
- Entity data models. Design schemas for client, competitor, and content entities. Own schema evolution and backward compatibility.
- Operational environment. CI/CD, containers, deployment pipelines, credential management. Every deploy passes CI before production.
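The freshness-SLA bullet above can be sketched in a few lines of Python. The source names and SLA values here are illustrative assumptions, not the real configuration; the point is the shape of the check — each source carries its own SLA, and a breach is flagged before anyone has to notice a stale dashboard.

```python
from datetime import datetime, timedelta, timezone

# Per-source freshness SLAs (hypothetical names and values for illustration).
SLAS = {"gsc": timedelta(hours=24), "ga4": timedelta(hours=6)}

def freshness_breaches(last_loaded: dict, now: datetime) -> list:
    """Return the sources whose latest successful load is older than their SLA."""
    return sorted(src for src, sla in SLAS.items()
                  if now - last_loaded[src] > sla)

now = datetime(2024, 1, 2, tzinfo=timezone.utc)
breaches = freshness_breaches(
    {"gsc": now - timedelta(hours=30),   # stale: past the 24h SLA
     "ga4": now - timedelta(hours=2)},   # fresh: well within the 6h SLA
    now,
)
print(breaches)  # ['gsc']
```

In production this check would run on a schedule and emit an alert event per breached source, which is exactly the kind of event-driven trigger the role describes.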
The Ideal Person for This Role
- A builder who ships. You care about getting working systems into production, not endless planning or polish. You've built data infrastructure people actually rely on.
- An operator, not just an architect. You don't just design systems, you run them. You find satisfaction in making things reliable, not just making them work once.
- An owner. You take responsibility for outcomes, not just tasks. When a pipeline you built breaks at 3am, you fix it and make sure it doesn't break again.
- Humble and curious. You acknowledge what you don't know, ask good questions, and genuinely want to learn. You take feedback as a gift, not a threat.
- A first-principles thinker. You understand why things work, not just how. You can go five levels deep on schema decisions, validation strategies, and architecture tradeoffs.
- Always improving. You're not satisfied with "good enough." You actively seek ways to get better at your craft and make systems better over time.
Requirements
- 4+ years in data engineering, platform engineering, or infrastructure-heavy backend work.
- Python, SQL, pipeline orchestration (Airflow, Dagster, Prefect, or similar).
- Event-driven architectures or real-time data processing.
- Third-party API integration. You've built resilient connectors against external APIs with rate limits, auth flows, pagination, and breaking changes. Not just calling endpoints, but handling the full operational reality.
- Pipeline fundamentals. Idempotent pipelines, backfill strategies, and schema evolution handled gracefully in production.
- Data quality systems in production. Automated checks on distributions, volumes, freshness, null rates. Not a one-off notebook.
- Data observability. Freshness monitoring, anomaly detection, lineage tracking, blast radius analysis.
- Alerting design. Threshold tuning, noise reduction, escalation paths. You've thought about false positives as much as missed detections.
- Own your infrastructure. Containers, CI/CD, deployment pipelines, monitoring, credential management. No platform team to hand off to.
- Multi-tenant or multi-client data systems. Tenant isolation, per-client configuration, and operational overhead at scale.
- APIs or service layers for data exposure. You've built interfaces that other systems consume, not just internal scripts.
- Collaborative. You'll work closely with product engineers to define data contracts and interfaces. You document decisions, write clear specs, and communicate tradeoffs clearly in writing.
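The third-party API integration requirement above — resilient connectors that handle rate limits, pagination, and transient failures — can be sketched as follows. The stub API and its page shape are invented for illustration; a real connector would also handle auth refresh and breaking schema changes.

```python
import time

def fetch_all(fetch_page, max_retries=3, base_delay=0.01):
    """Walk a cursor-paginated API, retrying transient failures with
    exponential backoff. `fetch_page` is any callable taking a cursor and
    returning (items, next_cursor); next_cursor is None on the last page."""
    items, cursor = [], None
    while True:
        for attempt in range(max_retries):
            try:
                page, cursor = fetch_page(cursor)
                break
            except ConnectionError:
                if attempt == max_retries - 1:
                    raise  # give up after the final retry
                time.sleep(base_delay * 2 ** attempt)  # back off before retrying
        items.extend(page)
        if cursor is None:
            return items

# Stub API: two pages, with one transient failure on the first call.
calls = {"n": 0}
def stub(cursor):
    calls["n"] += 1
    if calls["n"] == 1:
        raise ConnectionError("transient")  # simulates a rate-limit blip
    if cursor is None:
        return [1, 2], "page2"
    return [3], None

result = fetch_all(stub)
print(result)  # [1, 2, 3]
```

The design choice worth noting: retry logic lives per-page, not per-sync, so a blip on page 40 of 50 doesn't restart the whole pull — which matters when the same connector runs across many client accounts.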
Preferred Qualifications
- Experience with marketing or analytics data (GA4, GSC, SEO tools)
- Prior experience at a fast-moving startup
What's in It for You
- Fully remote position
- Work directly with the CTO on high-impact projects
- High ownership and autonomy. No micromanagement.
- First-hand exposure to cutting-edge AI and search technology
- Your work will directly impact the performance of well-known ($10M+ ARR) companies
- Join a fast-growing company at the intersection of AI and marketing
Our Hiring Process
- Application
- Take-Home Project
- Technical Deep Dive
- Leadership Interview
- Reference Checks
Pls apply here:
tinyurl [dot] com/ysk8w2eu
We are seeking talented and motivated Telesales Representatives to join our dynamic sales team. In this role, you will be responsible for proactively reaching out to potential customers, promoting our products and services, and converting leads into successful sales.
Position Summary
A Cloud Engineer helps design solutions for, enable, migrate, and onboard clients onto a secure cloud platform, offloading the heavy lifting so that clients can focus on their own business value creation.
Job Description
- Assessing existing customer systems and/or cloud environment to determine the best migration approach and supporting tools used
- Build a secure and compliant cloud environment, with a proven enterprise operating model, on-going cost optimisation, and day-to-day infrastructure management
- Provide and implement cloud solutions that reduce operational overhead and risk; automate common activities such as change requests, monitoring, patch management, security, and backup services; and provide full-lifecycle services to provision, run, and support client infrastructure
- Collaborate with internal service teams to meet the clients’ needs for their infrastructure and application deployments
- Troubleshoot complex infrastructure deployments, recreate customer issues, and build proof-of-concept environments that follow cloud best practices and well-architected frameworks
- Apply advanced troubleshooting techniques to provide unique solutions to our customers’ individual needs
- Work on critical, highly complex customer problems that will span across multiple cloud platforms and services
- Identify and drive improvements on process and technical related issues. Act as an escalation point of contact for the clients
- Drive client meetings and communication during reviews
Requirement:
- Degree in computer science or a similar field.
- At least 2 years of experience in the field of cloud computing.
- Experience with CI/CD systems.
- Strong knowledge of cloud services
- Exposure to AWS/GCP and other cloud-based infrastructure platforms
- Experience with AWS configuration and management: EC2, S3, EBS, ELB, IAM, VPC, RDS, CloudFront, etc.
- Exposure to architecting, designing, developing, and implementing cloud solutions on AWS or other cloud platforms such as Azure and Google Cloud
- Proficient in the use and administration of all versions of MS Windows Server
- Experience with Linux and Windows system administration and web server configuration and monitoring
- Solid programming skills in Python, Java, Perl
- Good understanding of software design principles and best practices
- Good knowledge of REST APIs
- Should have hands-on experience with a deployment orchestration tool (Jenkins, UrbanCode, Bamboo, etc.)
- Experience with Docker, Kubernetes, and Helm charts
- Hands-on experience with Ansible and Git repositories
- Knowledge in Maven / Gradle
- Azure, AWS, and GCP certifications are preferred.
- Troubleshooting and analytical skills.
- Good communication and collaboration skills.
- 7+ years of experience in System Administration, Networking, Automation, Monitoring
- Excellent problem solving, analytical skills and technical troubleshooting skills
- Experience managing systems deployed in public cloud platforms (Microsoft Azure, AWS or Google Cloud)
- Experience implementing and maintaining CI/CD pipelines (Jenkins, Concourse, etc.)
- Linux experience across flavours (Ubuntu, Red Hat, CentOS): sysadmin work, bash scripting
- Experience setting up monitoring (Datadog, Splunk, etc.)
- Experience in Infrastructure Automation tools like Terraform
- Experience in Package Manager for Kubernetes like Helm Charts
- Experience with databases and data storage (Oracle, MongoDB, PostgreSQL, ELK stack)
- Experience with Docker
- Experience with orchestration technologies (Kubernetes or DC/OS)
- Familiar with Agile Software Development
Who should not apply for this job
- If you are looking for a 100% salary hike but can't commit to the value you will bring to the table
- If you never read books
- If you jump companies every 11-12 months
- If you are not comfortable working on Saturdays
- If you have less than 2 years of experience
- If you have more than 4 years of experience
At this position you will:
- Gain solid experience with high-load distributed architectures built on REST services.
- Work with automated CI/CD processes, AWS cloud, and VoIP.
- Implement and support microservices based on Elasticsearch and MongoDB, plus frontend integration.
Key-Skills Required:
Node.js | Express | REST API | JavaScript | Redux-Saga | MongoDB | Web security (TLS/SSL) | WebSockets | Promises & callbacks | Databases & data structures | Redis | Elasticsearch | React.js
Key Deliverables:
- New feature design and implementation, Bug fixing, testing and performance tuning.
- Work on the API and Engines.
- Code deployment on cloud & maintenance of the same.
- Take complete ownership of a product/feature from setup to deployment.
- Time-bound feature delivery & updating.
- Cost-saving using efficient & effective technologies
Role and Responsibilities:
- Work on back-end development (plus frontend integration) of core scripts using Node.js, MongoDB, Express, Redux, and Redis.
- Manage key-value databases like Redis.
- Active participation in the development of a sophisticated platform as one of the leading developers.
- Document code consistently throughout the development process by listing a description of the program, special instructions, and any changes made in database tables on procedural, modular, and database level.
- Respond promptly and professionally to bug reports.
- Knowledge of Bluebird.js (promises), async patterns, etc. will be an advantage.
- Coding and programming using Object-Oriented Programming, Data Structure and Algorithms, architecture/ design and build RESTful API.
- Passion for building products and features from scratch, with a focus on web security, TLS/SSL, WebSockets, etc.
- Familiarity with one or more of wireframing, prototyping, or functional documentation of business requirements.
4-10 LPA based on experience and on performance in the interview round (70% Fixed - 30% Variable Incentive based on performance/delivery schedule)
(We do ZERO deductions since the salary will be paid from US/Singapore)
