50+ Remote Python Jobs in India
Apply to 50+ Remote Python Jobs on CutShort.io. Find your next job, effortlessly. Browse Python Jobs and apply today!


AccioJob is conducting a Walk-In Hiring Drive with MakunAI Global for the position of Python Engineer.
To apply, register and select your slot here: https://go.acciojob.com/cE8XQy
Required Skills: DSA, Python, Django, FastAPI
Eligibility:
- Degree: All
- Branch: All
- Graduation Year: 2022, 2023, 2024, 2025
Work Details:
- Work Location: Noida (Hybrid)
- CTC: 3.2 LPA to 3.5 LPA
Evaluation Process:
Round 1: Offline Assessment at AccioJob Skill Centers located in Noida, Greater Noida, and Delhi
Further Rounds (for shortlisted candidates only):
- Profile & Background Screening Round
- Technical Interview Round 1
- Technical Interview Round 2
Important Note: Bring your laptop & earphones for the test.
Register here: https://go.acciojob.com/cE8XQy
Springer Capital is a cross-border asset management firm specializing in real estate investment banking between China and the USA. We are offering a remote internship for aspiring data engineers interested in data pipeline development, data integration, and business intelligence.
The internship offers flexible start and end dates. A short quiz or technical task may be required as part of the selection process.
Responsibilities:
- Design, build, and maintain scalable data pipelines for structured and unstructured data sources
- Develop ETL processes to collect, clean, and transform data from internal and external systems
- Support integration of data into dashboards, analytics tools, and reporting systems
- Collaborate with data analysts and software developers to improve data accessibility and performance
- Document workflows and maintain data infrastructure best practices
- Assist in identifying opportunities to automate repetitive data tasks
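For a concrete picture of the work, here is a minimal ETL sketch in Python with pandas; the file path, column names, and target table are illustrative assumptions, not details from the role.

```python
import sqlite3

import pandas as pd

def run_etl(csv_path: str, db_path: str) -> None:
    """Extract raw records, clean them, and load them into a reporting table."""
    # Extract: read a raw export (path and columns are illustrative)
    raw = pd.read_csv(csv_path)

    # Transform: normalize column names, drop duplicates and incomplete rows
    raw.columns = [c.strip().lower().replace(" ", "_") for c in raw.columns]
    cleaned = raw.drop_duplicates().dropna(subset=["property_id"])  # hypothetical key column

    # Load: write to a table that dashboards and reports can query
    with sqlite3.connect(db_path) as conn:
        cleaned.to_sql("property_metrics", conn, if_exists="replace", index=False)

run_etl("raw_export.csv", "warehouse.db")
```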
🚀 Why join Bound AI (OIP Insurtech)?
We build real-world AI workflows that transform insurance operations—from underwriting to policy issuance. You’ll join a fast-growing, global team of engineers and innovators tackling the toughest problems in document intelligence and agent orchestration. We move fast, ship impact, and value autonomy over bureaucracy.
🧭 What You'll Be Doing
- Design and deliver end‑to‑end AI solutions: from intake of SOVs, loss runs, and documents to deployed intelligent agent workflows.
- Collaborate closely with stakeholders (product, operations, engineering) to architect scalable ML & GenAI systems that solve real insurance challenges.
- Translate business needs into architecture diagrams, data flows, and system integrations.
- Choose and integrate components such as RAG pipelines, LLM orchestration (LangChain, DSPy), vector databases, and MLOps tooling.
- Oversee technical proof-of-concepts, pilot projects, and production rollout strategies.
- Establish governance and best practices for model lifecycle, monitoring, error handling, and versioning.
- Act as a trusted advisor and technical leader—mentor engineers and evangelize design principles across teams.
🎯 What We’re Looking For
- 6+ years of experience delivering technical solutions in machine learning, AI engineering or solution architecture.
- Proven track record leading design, deployment, and integration of GenAI-based systems (LLM tuning, RAG, multi-agent orchestration).
- Fluency with Python production code, cloud platforms (AWS, GCP, Azure), and container orchestration tools.
- Excellent communication skills—able to bridge technical strategy and business outcomes with clarity.
- Startup mindset—resourceful, proactive, and hands‑on when needed.
- Bonus: experience with insurance-specific workflows or document intelligence domains (SOVs, loss runs, ACORD forms).
🛠️ Core Skills & Tools
- Foundation models, LLM pipelines, and vector-based retrieval (embedding search, RAG).
- Architecture modeling and integration: APIs, microservices, orchestration frameworks (LangChain, Haystack, DSPy).
- MLOps: CI/CD for models, monitoring, feedback loops, and retraining pipelines.
- Data engineering: preprocessing, structured/unstructured data integration, pipelines.
- Infrastructure: Kubernetes, Docker, cloud deployment, serverless components.
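To make the retrieval piece concrete: at the heart of embedding search (and hence RAG) is nearest-neighbour lookup over vectors. A minimal sketch in Python with NumPy, using random vectors as stand-ins for real embeddings:

```python
import numpy as np

def cosine_top_k(query_vec: np.ndarray, doc_vecs: np.ndarray, k: int = 3) -> np.ndarray:
    """Return indices of the k most similar document embeddings."""
    # Normalize so that the dot product equals cosine similarity
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    scores = d @ q
    return np.argsort(scores)[::-1][:k]

# Toy corpus: 5 documents embedded in a 4-dimensional space
docs = np.random.rand(5, 4)
query = np.random.rand(4)
print(cosine_top_k(query, docs))  # e.g. [2 0 4]
```

In production, a vector database performs this search at scale using approximate-nearest-neighbour indexes rather than the brute-force scan shown here.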
📈 Why This Role Matters
As an AI Solution Architect, you’ll shape the blueprint for how AI transforms insurance workflows—aligning product strategy, operational impact, and technical scalability. You're not just writing code; you’re orchestrating systems that make labor-intensive processes smarter, faster, and more transparent.

💼 Job Title: Full Stack Developer (Fresher/Experienced)
🏢 Company: SDS Softwares
💻 Location: Work from Home
💸 Salary range: ₹7,000 - ₹18,000 per month (based on knowledge and interview)
🕛 Shift Timings: 12 PM to 9 PM
About the role: As a Full Stack Developer, you will work on both the front-end and back-end of web applications. You will be responsible for developing user-friendly interfaces and maintaining the overall functionality of our projects.
⚜️ Key Responsibilities:
- Collaborate with cross-functional teams to define, design, and ship new features.
- Develop and maintain high-quality web applications (frontend + backend).
- Troubleshoot and debug applications to ensure peak performance.
- Participate in code reviews and contribute to the team’s knowledge base.
⚜️ Required Skills:
- Proficiency in HTML, CSS, JavaScript, and React.js for front-end development.
- Understanding of server-side languages such as Node.js, Python, or PHP.
- Familiarity with database technologies such as MySQL, MongoDB, or PostgreSQL.
- Basic knowledge of version control systems, particularly Git.
- Strong problem-solving skills and attention to detail.
- Excellent communication skills and a team-oriented mindset.
💠 Qualifications:
- Recent graduates or individuals with internship experience (6 months to 1.5 years) in software development.
- Must have a personal laptop and stable internet connection.
- Ability to join immediately is preferred.
If you are passionate about coding and eager to learn, we would love to hear from you. 👍


Research Intern
Position Overview
We are seeking a motivated Research Intern to join our AI research team, focusing on Text-to-Speech (TTS) and Automatic Speech Recognition (ASR) technologies. The intern will play a crucial role in evaluating our proprietary models against industry benchmarks, analyzing competitive voice agent platforms, and contributing to cutting-edge research in speech AI technologies.
Key Responsibilities
Model Evaluation & Benchmarking
- Conduct comprehensive evaluation of our TTS and ASR models against existing state-of-the-art models
- Design and implement evaluation metrics and frameworks for speech quality assessment
- Perform comparative analysis of model performance across different datasets and use cases
- Generate detailed reports on model strengths, weaknesses, and improvement opportunities
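The standard ASR benchmarking metric is word error rate (WER): the word-level edit distance between the reference and hypothesis transcripts, divided by the reference length. A self-contained sketch:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Levenshtein distance over words via dynamic programming
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

print(word_error_rate("the cat sat", "the cat sat down"))  # 0.333...
```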
Competitive Analysis
- Evaluate and compare our voice agent platform with existing solutions (Vapi, Bland AI, and other competitors)
- Analyze feature sets, performance metrics, and user experience across different voice agent platforms
- Conduct technical deep-dives into competitive architectures and methodologies
- Provide strategic recommendations based on competitive landscape analysis
Research & Innovation
- Monitor and analyze emerging trends in ASR, TTS, and voice AI technologies
- Research novel approaches to improve ASR and TTS model performance
- Investigate new architectures, training techniques, and optimization methods
- Stay current with academic literature and industry developments in speech AI
Model Development & Training
- Assist in training TTS and ASR models on various datasets
- Implement and experiment with different model architectures and configurations
- Perform model fine-tuning for specific use cases and domains
- Optimize models for different deployment scenarios (edge, cloud, real-time)
- Conduct data preprocessing and augmentation for training datasets
Documentation & Reporting
- Maintain detailed documentation of experiments, methodologies, and results
- Prepare technical reports and presentations for internal stakeholders
- Contribute to research publications and technical blog posts
- Create visualization and analysis tools for model performance tracking
Required Qualifications
Education & Experience
- Currently pursuing or recently completed Bachelor's/Master's degree in Computer Science, Electrical Engineering, Machine Learning, or related field
- Strong academic background in machine learning, deep learning, and signal processing
- Previous experience with speech processing, NLP, or audio ML projects (academic or professional)
Technical Skills
- Programming Languages: Proficiency in Python; experience with PyTorch, TensorFlow
- Speech AI Frameworks: Experience with libraries like librosa, torchaudio, speechbrain, or similar
- Machine Learning: Strong understanding of deep learning architectures, training procedures, and evaluation methods
- Data Processing: Experience with audio data preprocessing, feature extraction, and dataset management
- Tools & Platforms: Familiarity with Colab or Jupyter notebooks, Git, Docker, and cloud platforms (AWS/GCP/Azure)
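As an illustration of typical audio preprocessing in this stack, the sketch below computes a log-mel spectrogram with librosa; the file path and frame parameters (25 ms windows, 10 ms hops at 16 kHz) are illustrative defaults.

```python
import librosa
import numpy as np

# Load an audio clip (path is illustrative) and resample to 16 kHz
audio, sr = librosa.load("sample.wav", sr=16000)

# Log-mel spectrogram: a standard input feature for ASR/TTS models
mel = librosa.feature.melspectrogram(y=audio, sr=sr, n_fft=400, hop_length=160, n_mels=80)
log_mel = librosa.power_to_db(mel, ref=np.max)

print(log_mel.shape)  # (80, num_frames)
```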
Preferred Qualifications
- Experience with transformer architectures, attention mechanisms, and sequence-to-sequence models
- Knowledge of speech synthesis techniques (WaveNet, Tacotron, FastSpeech, etc.)
- Understanding of ASR architectures (Wav2Vec, Whisper, Conformer, etc.)
- Experience with model optimization techniques (quantization, pruning, distillation)
- Familiarity with MLOps tools and model deployment pipelines
- Previous work with voice AI applications or conversational AI systems
Skills & Competencies
Technical Competencies
- Strong analytical and problem-solving abilities
- Ability to design and conduct rigorous experiments
- Experience with statistical analysis and performance metrics
- Understanding of audio signal processing fundamentals
- Knowledge of distributed training and large-scale model development
Soft Skills
- Excellent written and verbal communication skills
- Ability to work independently and manage multiple projects
- Strong attention to detail and commitment to reproducible research
- Collaborative mindset and ability to work in cross-functional teams
- Curiosity and passion for staying current with AI research trends
What You'll Gain
Learning Opportunities
- Hands-on experience with state-of-the-art speech AI technologies
- Exposure to full model development lifecycle from research to deployment
- Mentorship from experienced AI researchers and engineers
- Opportunity to contribute to cutting-edge research projects
Professional Development
- Experience with industry-standard tools and methodologies
- Opportunity to present research findings to technical and business stakeholders
- Potential for research publication and conference presentations
- Networking opportunities within the AI research community
Springer Capital is a cross-border asset management firm focused on real estate investment banking in China and the USA. We are offering a remote internship for individuals passionate about automation, cloud infrastructure, and CI/CD pipelines. Start and end dates are flexible, and applicants may be asked to complete a short technical quiz or assignment as part of the application process.
Responsibilities:
▪ Assist in building and maintaining CI/CD pipelines to automate development workflows
▪ Monitor and improve system performance, reliability, and scalability
▪ Manage cloud-based infrastructure (e.g., AWS, Azure, or GCP)
▪ Support containerization and orchestration using Docker and Kubernetes
▪ Implement infrastructure as code using tools like Terraform or CloudFormation
▪ Collaborate with software engineering and data teams to streamline deployments
▪ Troubleshoot system and deployment issues across development and production environments

Title – Python Developer (Healthcare RCM Automation)
· Department - Technology
· Shift - Night
· Location - Noida
· Education - Graduation
· Experience – Minimum 3-7 years of professional experience with the skills below.
· Interpersonal Skills – Good communication skills, a positive attitude, and confidence.
The Python Developer is responsible for building, debugging, and implementing application projects in Python, and for developing program specifications and coded modules according to specifications and client standards.
Responsibilities
· Advanced Python Programming: Extensive experience in Python, with a deep understanding of Python principles, design patterns, and best practices. Proficiency in developing scalable and efficient Python code, with a focus on automation, data processing, and backend services. Demonstrated ability with automation libraries like PyAutoGUI for GUI automation tasks, enabling the automation of mouse and keyboard actions.
· Experience with Selenium for web automation: Capable of automating web browsers to mimic user actions, scrape web data, and test web applications.
· Python Frameworks and Libraries: Strong experience with popular Python frameworks and libraries relevant to data processing, web application development, and automation, such as Flask or Django for web development, Pandas and NumPy for data manipulation, and Celery for task scheduling.
· SQL Server Expertise: Advanced knowledge of SQL Server management and development.
· API Development and Integration: Experience in developing and consuming APIs. Understanding of API security best practices. Familiarity with integrating third-party services and APIs into the existing ecosystem.
· Version Control and CI/CD: Proficiency in using version control systems, such as Git. Experience with continuous integration and continuous deployment (CI/CD) pipelines
· Unit Testing and Debugging: Strong understanding of testing practices, including unit testing and integration testing. Experience with Python testing tools. Skilled in debugging and performance profiling.
· Containerization and Virtualization: Familiarity with containerization and orchestration tools, such as Docker and Kubernetes, to enhance application deployment and scalability.
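As a rough illustration of the Selenium-style web automation mentioned above (the URL and element selectors are hypothetical):

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

# Launch a browser session (requires a matching driver on PATH)
driver = webdriver.Chrome()
try:
    driver.get("https://example.com/claims")   # illustrative URL
    # Mimic a user action: locate a search box and submit a query
    box = driver.find_element(By.NAME, "q")    # hypothetical element name
    box.send_keys("claim status")
    box.submit()
    # Scrape result rows for downstream processing
    rows = driver.find_elements(By.CSS_SELECTOR, "table.results tr")
    print(f"Scraped {len(rows)} rows")
finally:
    driver.quit()
```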
Requirements & Skills
· Analytical Thinking: Ability to analyze complex problems and break them down into manageable parts. Strong logical reasoning and troubleshooting skills.
· Communication: Excellent verbal and written communication skills. Ability to effectively articulate technical challenges and solutions to both technical and non-technical team members.
· Team Collaboration: Experience working in agile development environments. Ability to work collaboratively in cross-functional teams and with stakeholders from different backgrounds.
· Continuous Learning: A strong desire to learn new technologies and frameworks. Keeping up to date with industry trends and advancements in healthcare RCM, AI, and automation technologies.
Additional Preferred Skills:
· Industry-Specific Knowledge is a plus:
· Familiarity with healthcare industry standards and regulations, such as HIPAA, is highly advantageous.
· Understanding of healthcare revenue cycle management processes and challenges. Experience with healthcare data formats and standards (e.g., HL7, FHIR) is beneficial.
Educational Qualifications:
· Bachelor’s degree in a related field.
· Relevant technical certifications are a plus


Role Objective
Develop business relevant, high quality, scalable web applications. You will be part of a dynamic AdTech team solving big problems in the Media and Entertainment Sector.
Roles & Responsibilities
* Application Design: Understand requirements from the user, create stories and be a part of the design team. Check designs, give regular feedback and ensure that the designs are as per user expectations.
* Architecture: Create scalable and robust system architecture. The design should be in line with the client infra. This could be on-prem or cloud (Azure, AWS or GCP).
* Development: You will be responsible for the development of the front-end and back-end. The application stack will comprise (depending on the project) SQL, Django, Angular/React, HTML, and CSS. Knowledge of GoLang and Big Data is a plus.
* Deployment: Suggest and implement a deployment strategy that is scalable and cost-effective. Create a detailed resource architecture and get it approved. CI/CD deployment on IIS or Linux. Knowledge of Docker is a plus.
* Maintenance: Maintaining development and production environments will be a key part of your job profile. This will also include troubleshooting, fixing bugs, and suggesting ways to improve the application.
* Data Migration: In the case of database migration, you will be expected to suggest appropriate strategies and implementation plans.
* Documentation: Create a detailed document covering important aspects like HLD, Technical Diagram, Script Design, SOP etc.
* Client Interaction: You will be interacting with the client on a day-to-day basis and hence having good communication skills is a must.
Requirements
Education - B.Tech (Comp. Sc., IT) or equivalent
Experience - 3+ years of experience developing applications on Django, Angular/React, HTML, and CSS
Behavioural Skills-
1. Clear and Assertive communication
2. Ability to comprehend the business requirement
3. Teamwork and collaboration
4. Analytical thinking
5. Time Management
6. Strong troubleshooting and problem-solving skills
Technical Skills-
1. Back-end and Front-end Technologies: Django, Angular/React, HTML and CSS.
2. Cloud Technologies: AWS, GCP, and Azure
3. Big Data Technologies: Hadoop and Spark
4. Containerized Deployment: Docker and Kubernetes are a plus.
5. Other: Understanding of Golang is a plus.


About Us
We are a company where the ‘HOW’ of building software is just as important as the ‘WHAT.’ We partner with large organizations to modernize legacy codebases and collaborate with startups to launch MVPs, scale, or act as extensions of their teams. Guided by Software Craftsmanship values and eXtreme Programming Practices, we deliver high-quality, reliable software solutions tailored to our clients' needs.
We strive to:
- Bring our clients' dreams to life by being their trusted engineering partners, crafting innovative software solutions.
- Challenge offshore development stereotypes by delivering exceptional quality and proving the value of craftsmanship.
- Empower clients to deliver value quickly and frequently to their end users.
- Ensure long-term success for our clients by building reliable, sustainable, and impactful solutions.
- Raise the bar of software craft by setting a new standard for the community.
Job Description
This is a remote position.
Our Core Values
- Quality with Pragmatism: We aim for excellence with a focus on practical solutions.
- Extreme Ownership: We own our work and its outcomes fully.
- Proactive Collaboration: Teamwork elevates us all.
- Pursuit of Mastery: Continuous growth drives us.
- Effective Feedback: Honest, constructive feedback fosters improvement.
- Client Success: Our clients’ success is our success.
Experience Level
This role is ideal for engineers with 4+ years of hands-on software development experience, particularly in Python and ReactJS at scale.
Role Overview
If you’re a Software Craftsperson who takes pride in clean, test-driven code and believes in Extreme Programming principles, we’d love to meet you. At Incubyte, we’re a DevOps organization where developers own the entire release cycle, meaning you’ll get hands-on experience across programming, cloud infrastructure, client communication, and everything in between. Ready to level up your craft and join a team that’s as quality-obsessed as you are? Read on!
What You'll Do
- Write Tests First: Start by writing tests to ensure code quality
- Clean Code: Produce self-explanatory, clean code with predictable results
- Frequent Releases: Make frequent, small releases
- Pair Programming: Work in pairs for better results
- Peer Reviews: Conduct peer code reviews for continuous improvement
- Product Team: Collaborate in a product team to build and rapidly roll out new features and fixes
- Full Stack Ownership: Handle everything from the front end to the back end, including infrastructure and DevOps pipelines
- Never Stop Learning: Commit to continuous learning and improvement
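For candidates unfamiliar with the test-first workflow, a minimal example of the rhythm, assuming pytest: the test is written before the code it exercises, and the implementation is then the simplest thing that makes it pass.

```python
# Shown in one snippet for brevity; in practice these live in test_pricing.py
# and pricing.py. The tests are written first and fail until the code exists.

def test_discount_applied_above_threshold():
    assert price_with_discount(120.0) == 108.0   # 10% off orders over 100

def test_no_discount_at_or_below_threshold():
    assert price_with_discount(80.0) == 80.0

# The simplest implementation that makes both tests pass:
def price_with_discount(amount: float, threshold: float = 100.0, rate: float = 0.10) -> float:
    return amount * (1 - rate) if amount > threshold else amount
```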
Requirements
What We're Looking For
- Proficiency in some or all of the following: ReactJS, JavaScript, and Object-Oriented Programming in JavaScript
- 3+ years of Object-Oriented Programming with Python or equivalent
- 3+ years of experience working with relational (SQL) databases
- 3+ years of experience using Git to contribute code as part of a team of Software Craftspeople
Benefits
What We Offer
- Dedicated Learning & Development Budget: Fuel your growth with a budget dedicated solely to learning.
- Conference Talks Sponsorship: Amplify your voice! If you’re speaking at a conference, we’ll fully sponsor and support your talk.
- Cutting-Edge Projects: Work on exciting projects with the latest AI technologies
- Employee-Friendly Leave Policy: Recharge with ample leave options designed for a healthy work-life balance.
- Comprehensive Medical & Term Insurance: Full coverage for you and your family’s peace of mind.
- And More: Extra perks to support your well-being and professional growth.
Work Environment
- Remote-First Culture: At Incubyte, we thrive on a culture of structured flexibility — while you have control over where and how you work, everyone commits to a consistent rhythm that supports their team during core working hours for smooth collaboration and timely project delivery. By striking the perfect balance between freedom and responsibility, we enable ourselves to deliver high-quality standards our customers recognize us by. With asynchronous tools and push for active participation, we foster a vibrant, hands-on environment where each team member’s engagement and contributions drive impactful results.
- Work-In-Person: Twice a year, we come together for two-week sprints to collaborate in person, foster stronger team bonds, and align on goals. Additionally, we host an annual retreat to recharge and connect as a team. All travel expenses are covered.
- Proactive Collaboration: Collaboration is central to our work. Through daily pair programming sessions, we focus on mentorship, continuous learning, and shared problem-solving. This hands-on approach keeps us innovative and aligned as a team.
Incubyte is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.

Location: Hybrid / Remote
Type: Contract / Full‑Time
Experience: 5+ Years
Qualification: Bachelor’s or Master’s in Computer Science or a related technical field
Responsibilities:
- Architect & implement the RAG pipeline: embeddings ingestion, vector search (MongoDB Atlas or similar), and context-aware chat generation.
- Design and build Python‑based services (FastAPI) for generating and updating embeddings.
- Host and apply LoRA/QLoRA adapters for per‑user fine‑tuning.
- Automate data pipelines to ingest daily user logs, chunk text, and upsert embeddings into the vector store.
- Develop Node.js/Express APIs that orchestrate embedding, retrieval, and LLM inference for real‑time chat.
- Manage vector index lifecycle and similarity metrics (cosine/dot‑product).
- Deploy and optimize on AWS (Lambda, EC2, SageMaker), containerization (Docker), and monitoring for latency, costs, and error rates.
- Collaborate with frontend engineers to define API contracts and demo endpoints.
- Document architecture diagrams, API specifications, and runbooks for future team onboarding.
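A compressed sketch of the embedding-service side of this pipeline, assuming the OpenAI embeddings API (openai SDK v1+) and FastAPI; the in-memory list stands in for a real vector store such as MongoDB Atlas Vector Search, and the model name is illustrative:

```python
import math

from fastapi import FastAPI
from openai import OpenAI
from pydantic import BaseModel

app = FastAPI()
client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Stand-in for a managed vector store (e.g., MongoDB Atlas Vector Search)
INDEX: list[tuple[str, list[float]]] = []

class Doc(BaseModel):
    id: str
    text: str

def embed(text: str) -> list[float]:
    # Model name is illustrative; chunking of long logs is omitted here
    resp = client.embeddings.create(model="text-embedding-3-small", input=text)
    return resp.data[0].embedding

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

@app.post("/embeddings")
def upsert(doc: Doc) -> dict:
    INDEX.append((doc.id, embed(doc.text)))
    return {"indexed": doc.id}

@app.get("/search")
def search(q: str) -> dict:
    qv = embed(q)
    best = max(INDEX, key=lambda item: cosine(item[1], qv), default=None)
    return {"best_match": best[0] if best else None}
```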
Required Skills
- Strong Python expertise (FastAPI, async programming).
- Proficiency with Node.js and Express for API development.
- Experience with vector databases (MongoDB Atlas Vector Search, Pinecone, Weaviate) and similarity search.
- Familiarity with OpenAI’s APIs (embeddings, chat completions).
- Hands‑on with parameter‑efficient fine‑tuning (LoRA, QLoRA, PEFT/Hugging Face).
- Knowledge of LLM hosting best practices on AWS (EC2, Lambda, SageMaker).
- Containerization skills (Docker).
- Good understanding of RAG architectures, prompt design, and memory management.
- Strong Git workflow and collaborative development practices (GitHub, CI/CD).
Nice‑to‑Have:
- Experience with Llama family models or other open‑source LLMs.
- Familiarity with MongoDB Atlas free tier and cluster management.
- Background in data engineering for streaming or batch processing.
- Knowledge of monitoring & observability tools (Prometheus, Grafana, CloudWatch).
- Frontend skills in React to prototype demo UIs.

Ontrac Solutions is a leading technology consulting firm specializing in cutting-edge solutions that drive business transformation. We partner with organizations to modernize their infrastructure, streamline processes, and deliver tangible results.
Our client is actively seeking a Conversational AI Engineer with deep hands-on experience in Google Contact Center AI (CCAI) to join a high-impact digital transformation project via a GCP Premier Partner. As part of a staff augmentation model, you will be embedded within the client’s technology or contact center innovation team, delivering scalable virtual agent solutions that improve customer experience, agent productivity, and call deflection.
Key Responsibilities:
- Lead the design and buildout of Dialogflow CX/ES agents across chat and voice channels.
- Integrate virtual agents with client systems and platforms (e.g., Genesys, Twilio, NICE CXone, Salesforce).
- Develop fulfillment logic using Google Cloud Functions, Cloud Run, and backend integrations (via REST APIs and webhooks).
- Collaborate with stakeholders to define intent models, entity schemas, and user flows.
- Implement Agent Assist and CCAI Insights to augment live agent productivity.
- Leverage Google Cloud tools including Pub/Sub, Logging, and BigQuery to support and monitor the solution.
- Support tuning, regression testing, and enhancement of NLP performance using live utterance data.
- Ensure adherence to enterprise security and compliance requirements.
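For orientation, a minimal webhook-fulfillment sketch in Python with FastAPI; the tag and parameter names are hypothetical, and the request/response shapes follow the Dialogflow CX webhook JSON format (a hedged sketch; verify against the current API docs):

```python
from fastapi import FastAPI, Request

app = FastAPI()

@app.post("/webhook")
async def fulfill(request: Request) -> dict:
    body = await request.json()
    # Dialogflow CX passes the webhook tag and session parameters in the request
    tag = body.get("fulfillmentInfo", {}).get("tag", "")
    params = body.get("sessionInfo", {}).get("parameters", {})

    if tag == "order-status":  # hypothetical tag configured in the agent
        reply = f"Order {params.get('order_id', 'unknown')} is on its way."
    else:
        reply = "Sorry, I can't help with that yet."

    # Response shape per the Dialogflow CX WebhookResponse format
    return {"fulfillmentResponse": {"messages": [{"text": {"text": [reply]}}]}}
```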
Required Skills & Qualifications:
- 3+ years developing conversational AI experiences, including at least 1–2 years with Google Dialogflow CX or ES.
- Solid experience across GCP services (Functions, Cloud Run, IAM, BigQuery, etc.).
- Strong skills in Python or Node.js for webhook fulfillment and backend logic.
- Knowledge of NLU/NLP modeling, training, and performance tuning.
- Prior experience working in client-facing or embedded roles via consulting or staff augmentation.
- Ability to communicate effectively with technical and business stakeholders.
Nice to Have:
- Hands-on experience with Agent Assist, CCAI Insights, or third-party CCaaS tools (Genesys, Twilio Flex, NICE CXone).
- Familiarity with Vertex AI, AutoML, or other GCP ML services.
- Experience in regulated industries (healthcare, finance, insurance, etc.).
- Google Cloud certification in Cloud Developer or CCAI Engineer.


Ops Analysts/Sys Admin
Company Summary :
As the recognized global standard for project-based businesses, Deltek delivers software and information solutions to help organizations achieve their purpose. Our market leadership stems from the work of our diverse employees who are united by a passion for learning, growing and making a difference. At Deltek, we take immense pride in creating a balanced, values-driven environment, where every employee feels included and empowered to do their best work. Our employees put our core values into action daily, creating a one-of-a-kind culture that has been recognized globally. Thanks to our incredible team, Deltek has been named one of America's Best Midsize Employers by Forbes, a Best Place to Work by Glassdoor, a Top Workplace by The Washington Post and a Best Place to Work in Asia by World HRD Congress. www.deltek.com
External Job Title :
Systems Engineer 1
Position Responsibilities :
We are seeking a highly skilled and motivated System Engineer to join our team. Apart from a strong technical background, excellent problem-solving abilities, and a collaborative mindset, the ideal candidate will be a self-starter with a high level of initiative and a passion for experimentation. This role requires someone who thrives in a fast-paced environment and is eager to take on new challenges.
- Technical Skills:
Must Have Skills:
- PHP
- SQL; Relational Database Concepts
- At least one scripting language (Python, PowerShell, Bash, UNIX, etc.)
- Experience with Learning and Utilizing APIs
Nice to Have Skills:
- Experience with AI Initiatives & exposure of GenAI and/or Agentic AI projects
- Microsoft Power Apps
- Microsoft Power BI
- Atomic
- Snowflake
- Cloud-Based Application Development
- Gainsight
- Salesforce
- Soft Skills:
Must Have Skills:
- Flexible Mindset for Solution Development
- Independent and Self-Driven; Autonomous
- Investigative; drives toward resolving Root Cause of Stakeholder needs instead of treating Symptoms; Critical Thinker
- Collaborative mindset to drive best results
Nice to Have Skills:
- Business Acumen (Very Nice to Have)
- Responsibilities:
- Develop and maintain system solutions to meet stakeholder needs.
- Collaborate with team members and stakeholders to ensure effective communication and teamwork.
- Independently drive projects and tasks to completion with minimal supervision.
- Investigate and resolve root causes of issues, applying critical thinking to develop effective solutions.
- Adapt to changing requirements and maintain a flexible approach to solution development.
Qualifications :
- A college degree in Computer Science, Software Engineering, Information Science or a related field is required
- Minimum 2-3 years of programming experience with PHP, Power BI or Snowflake, Python, and API integration.
- Proven experience in system engineering or a related field.
- Strong technical skills in the required areas.
- Excellent problem-solving and critical thinking abilities.
- Ability to work independently and as part of a team.
- Strong communication and collaboration skills.

Greentree Capital is seeking an enthusiastic Software Network Engineering Intern to join our growing team. This internship will focus on integrating artificial intelligence and ChatGPT functionalities into our AI-powered chatbot systems across various investment project websites. The ideal candidate will have a passion for artificial intelligence and software development, with a strong foundation in programming and problem-solving.
Responsibilities:
- Collaborate with the development team to design and implement AI-driven chatbot functionalities for our investment project websites.
- Assist in the integration of ChatGPT into existing systems to enhance user interaction and experience using artificial intelligence technologies.
- Conduct research on artificial intelligence technologies, natural language processing (NLP), and recommend best practices for chatbot development.
- Test and debug AI-driven chatbot functionalities to ensure optimal performance and user satisfaction.
- Gather and analyze user feedback to improve AI-driven chatbot responses and capabilities.
- Document technical specifications and processes for the development of AI-enhanced chatbot features.
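A minimal sketch of the ChatGPT-integration core, assuming the openai Python SDK (v1+); the model name and system prompt are illustrative:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def chatbot_reply(history: list[dict], user_message: str) -> str:
    """Send the running conversation plus the new message to the model."""
    messages = [
        {"role": "system",
         "content": "You are a helpful assistant for an investment project website."},
        *history,
        {"role": "user", "content": user_message},
    ]
    resp = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    return resp.choices[0].message.content

history: list[dict] = []
print(chatbot_reply(history, "What projects are you currently raising for?"))
```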
Qualifications
- Strong foundation in programming languages such as Python, Java, or JavaScript.
- Familiarity with AI concepts, machine learning, and natural language processing (NLP) technologies.
- Experience with chatbot development and/or artificial intelligence frameworks is a plus.
- Ability to work in a fast-paced environment, be self-motivated, organized, and detail-oriented.
- Excellent communication skills, with the ability to work collaboratively within a team.
- A keen interest in learning about new artificial intelligence technologies and business processes.
Education & Experience
- Currently pursuing a Bachelor’s degree in Computer Science, Software Engineering, or a related field.
- Previous experience in software development or AI-related projects is preferred but not required.
Greentree’s website: www.greentree.group

Hookux is a forward-thinking company seeking a skilled Full Stack Developer to join our team. You will work on a variety of exciting projects that require problem-solving, innovation, and scalability. One such project is a stock market and crypto investing simulation platform that teaches children financial skills through gamified competition.
Key Responsibilities:
- Develop and maintain robust, scalable, and efficient front-end and back-end systems.
- Collaborate with cross-functional teams to define, design, and ship new features.
- Design and implement API endpoints and server-side logic.
- Work closely with the design and product teams to ensure the technical feasibility of UI/UX designs.
- Optimize the application for maximum speed and scalability.
- Write well-documented, clean code.
- Troubleshoot and debug applications.
- Stay up-to-date with emerging technologies and industry trends.
Technical Skills & Experience:
- Proficient in JavaScript/TypeScript, with expertise in React.js for front-end development.
- Strong experience with Node.js, Express.js, or other backend technologies.
- Familiarity with database technologies such as MongoDB, PostgreSQL, or MySQL.
- Experience with RESTful APIs and third-party integrations.
- Knowledge of cloud platforms like AWS, Azure, or Google Cloud.
- Proficient in version control (e.g., Git) and collaboration tools.
- Experience with agile methodologies and continuous integration/deployment (CI/CD).
Bonus Skills:
- Experience with React Native for mobile app development.
- Familiarity with blockchain technology or cryptocurrency-related platforms.
- Experience with containerization (e.g., Docker, Kubernetes).
- Knowledge of testing frameworks and tools.
Qualifications:
- Bachelor's degree in Computer Science, Engineering, or related field (or equivalent practical experience).
- years of experience in full stack development.
- Ability to manage multiple priorities and work independently as well as in a team environment.
Benefits:
- Competitive salary and performance bonuses.
- Opportunities for career growth and learning.
- Flexible working hours and remote working options.
What We Offer:
- Competitive salary or hourly rate
- Flexible working hours
- Opportunity to work on impactful, real-world projects
- Creative and supportive team environment
Mail your CV and portfolio to hr@hookux.com



Remote Job Opportunity
Job Title: Data Scientist
Contract Duration: 6 months+
Location: Offshore (India)
Work Time: 3 pm to 12 am
Must have 4+ Years of relevant experience.
Job Summary:
We are seeking an AI Data Scientist with a strong foundation in machine learning, deep learning, and statistical modeling to design, develop, and deploy cutting-edge AI solutions.
The ideal candidate will have expertise in building and optimizing AI models, with a deep understanding of both statistical theory and modern AI techniques. You will work on high-impact projects, from prototyping to production, collaborating with engineers, researchers, and business stakeholders to solve complex problems using AI.
Key Responsibilities:
Research, design, and implement machine learning and deep learning models for predictive and generative AI applications.
Apply advanced statistical methods to improve model robustness and interpretability.
Optimize model performance through hyperparameter tuning, feature engineering, and ensemble techniques.
Perform large-scale data analysis to identify patterns, biases, and opportunities for AI-driven automation.
Work closely with ML engineers to validate, train, and deploy the models.
Stay updated with the latest research and developments in AI and machine learning to ensure innovative and cutting-edge solutions.
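As one concrete example of the hyperparameter tuning mentioned above, a small scikit-learn grid search on synthetic data (the grid and scorer are illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=1000, n_features=20, random_state=42)

# Search over a small grid; real projects would use wider grids or Bayesian search
grid = GridSearchCV(
    RandomForestClassifier(random_state=42),
    param_grid={"n_estimators": [100, 300], "max_depth": [5, 10, None]},
    cv=5,
    scoring="f1",
)
grid.fit(X, y)
print(grid.best_params_, round(grid.best_score_, 3))
```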
Qualifications & Skills:
Education: PhD or Master's degree in Statistics, Mathematics, Computer Science, or a related field.
Experience:
4+ years of experience in machine learning and deep learning, with expertise in algorithm development and optimization.
Proficiency in SQL, Python, and visualization tools (Power BI).
Experience in developing mathematical models for business applications, preferably in finance, trading, image-based AI, biomedical modeling, or recommender systems.
Strong communication skills to interact effectively with both technical and non-technical stakeholders.
Excellent problem-solving skills with the ability to work independently and as part of a team.



Role Objective
Develop business relevant, high quality, scalable web applications. You will be part of a dynamic AdTech team solving big problems in the Media and Entertainment Sector.
Roles & Responsibilities
* Application Design: Understand requirements from the user, create stories and be a part of the design team. Check designs, give regular feedback and ensure that the designs are as per user expectations.
* Architecture: Create scalable and robust system architecture. The design should be in line with the client infra. This could be on-prem or cloud (Azure, AWS or GCP).
* Development: You will be responsible for the development of the front-end and back-end. The application stack will comprise (depending on the project) SQL, Django, Angular/React, HTML, and CSS. Knowledge of GoLang and Big Data is a plus.
* Deployment: Suggest and implement a deployment strategy that is scalable and cost-effective. Create a detailed resource architecture and get it approved. CI/CD deployment on IIS or Linux. Knowledge of Docker is a plus.
* Maintenance: Maintaining development and production environments will be a key part of your job profile. This will also include troubleshooting, fixing bugs, and suggesting ways to improve the application.
* Data Migration: In the case of database migration, you will be expected to suggest appropriate strategies and implementation plans.
* Documentation: Create a detailed document covering important aspects like HLD, Technical Diagram, Script Design, SOP etc.
* Client Interaction: You will be interacting with the client on a day-to-day basis, and hence having good communication skills is a must.
Requirements
Education - B.Tech (Comp. Sc., IT) or equivalent
Experience - 1+ years of experience developing applications on Django, Angular/React, HTML, and CSS
Behavioural Skills-
1. Clear and Assertive communication
2. Ability to comprehend the business requirement
3. Teamwork and collaboration
4. Analytical thinking
5. Time Management
6. Strong troubleshooting and problem-solving skills
Technical Skills-
1. Back-end and Front-end Technologies: Django, Angular/React, HTML and CSS.
2. Cloud Technologies: AWS, GCP, and Azure
3. Big Data Technologies: Hadoop and Spark are a plus
4. Containerized Deployment: Docker and Kubernetes are a plus.
5. Other: Understanding of Golang is a plus.

About the Role:
We are looking for a highly skilled Data Engineer with a strong foundation in Power BI, SQL, Python, and Big Data ecosystems to help design, build, and optimize end-to-end data solutions. The ideal candidate is passionate about solving complex data problems, transforming raw data into actionable insights, and contributing to data-driven decision-making across the organization.
Key Responsibilities:
- Data Modelling & Visualization
  - Build scalable and high-quality data models in Power BI using best practices.
  - Define relationships, hierarchies, and measures to support effective storytelling.
  - Ensure dashboards meet standards in accuracy, visualization principles, and timeliness.
- Data Transformation & ETL
  - Perform advanced data transformation using Power Query (M Language) beyond UI-based steps.
  - Design and optimize ETL pipelines using SQL, Python, and Big Data tools.
  - Manage and process large-scale datasets from various sources and formats.
- Business Problem Translation
  - Collaborate with cross-functional teams to translate complex business problems into scalable, data-centric solutions.
  - Decompose business questions into testable hypotheses and identify relevant datasets for validation.
- Performance & Troubleshooting
  - Continuously optimize performance of dashboards and pipelines for latency, reliability, and scalability.
  - Troubleshoot and resolve issues related to data access, quality, security, and latency, adhering to SLAs.
- Analytical Storytelling
  - Apply analytical thinking to design insightful dashboards, prioritizing clarity and usability over aesthetics.
  - Develop data narratives that drive business impact.
- Solution Design
  - Deliver wireframes, POCs, and final solutions aligned with business requirements and technical feasibility.
Required Skills & Experience:
- At least 3 years of experience as a Data Engineer or in a similar data-focused role.
- Strong expertise in Power BI: data modeling, DAX, Power Query (M Language), and visualization best practices.
- Hands-on with Python and SQL for data analysis, automation, and backend data transformation.
- Deep understanding of data storytelling, visual best practices, and dashboard performance tuning.
- Familiarity with DAX Studio and Tabular Editor.
- Experience in handling high-volume data in production environments.
Preferred (Good to Have):
- Exposure to Big Data technologies such as:
  - PySpark
  - Hadoop
  - Hive / HDFS
  - Spark Streaming (optional but preferred)
Why Join Us?
- Work with a team that's passionate about data innovation.
- Exposure to modern data stack and tools.
- Flat structure and collaborative culture.
- Opportunity to influence data strategy and architecture decisions.

Founding Full Stack Engineer
(Frontend & UX Focus)
Location: Remote (India preferred) | Type: Full-time | Compensation: Competitive salary + early-stage stock options
🧠 About Alpha
Modern revenue teams juggle 10+ point-solutions. Alpha unifies them into an agent-powered platform that plans, executes, and optimises GTM campaigns—so every touch happens on the right channel, at the right time, with the right context.
Alpha is building the world’s most intuitive AI stack for revenue teams: to engage, convert, and scale revenue with an AI-powered GTM team.
Our mission is to make AI not just accessible, but dependable and truly useful.
We’re early, funded, and building with urgency. Join us to help define what work looks like when AI works for you.
🔧 What You’ll Do
You’ll lead the development of our AI GTM platform and underlying AI agents to power seamless multi-channel GTMs.
This is a hybrid UX-engineering role: you’ll translate high-level user journeys into interfaces that feel clear, powerful, and trustworthy.
Your responsibilities:
- Design & implement end-to-end features across React-TS/Next.js, Node.js, Postgres, Redis, and Python micro-services for LLM agents.
- Build & document scalable GraphQL / REST APIs that expose our data model (Company, Person, Campaign, Sequence, Asset, ActivityRecord, InferenceSnippet).
- Integrate third-party APIs (CRM, email, ads, CMS) and maintain data sync reliability above 98%.
- Implement the dynamic agent flow builder with configurable steps, HITL checkpoints, and audit trails.
- Instrument product analytics, error tracking, and CI pipelines for fast feedback and safe releases.
- Work directly with the founder on product scoping, technical roadmap, and hiring pipeline.
✅ What We’re Looking For
- 1-3 years of experience building polished web apps (React, Vue, or similar)
- Strong eye for design fidelity, UX decisions, and motion
- Experience integrating frontend with backend APIs and managing state
- Experience with visual builders, workflow editors, or schema UIs is a big plus
- You love taking complex systems and making them feel simple
💎 What You’ll Get
- Competitive salary + high-leverage early equity
- Ownership of user experience at the most critical phase
- A tight feedback loop with real users from Day 1
- Freedom to shape UI decisions, patterns, and performance for the long haul


Software Development Intern
About This Role
We're building next-generation browser agents that combine accuracy, security, and advanced task learning capabilities. We're looking for self-driven, independent interns who thrive on exploration and problem-solving to help us push the boundaries of what's possible with intelligent web automation.
This isn't a traditional learning internship—we want builders who have already proven they can ship projects and tackle challenges autonomously. You'll work across our full tech stack, from backend APIs to frontend interfaces, with access to cutting-edge AI-powered development tools while contributing to the future of browser automation.
What You'll Do
- Develop intelligent browser agents with advanced task learning and execution capabilities
- Build secure automation systems that can navigate complex web environments accurately
- Create robust AI-powered workflows using LangChain and modern ML frameworks
- Design and implement security measures for safe browser automation
- Create comprehensive test environments for agent validation and performance testing
- Debug and fix application bugs across the full stack to ensure reliable agent operation
- Solve complex problems independently using AI code assistants (Cursor, v0.dev, etc.)
- Explore and experiment with new technologies in AI agent development
- Own projects end-to-end from conception to deployment
- Work across the full stack as needed—no rigid role boundaries
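As a flavour of the browser-automation substrate such agents sit on, a minimal Playwright sketch in Python (the target URL is illustrative):

```python
from playwright.sync_api import sync_playwright

# Drive a real browser, extract data, and hand it to an agent for reasoning
with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)
    page = browser.new_page()
    page.goto("https://example.com")  # illustrative target
    title = page.title()
    links = page.eval_on_selector_all("a", "els => els.map(e => e.href)")
    print(title, len(links))
    browser.close()
```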
Our Tech Stack
Backend:
- Python with FastAPI
- LangChain for AI/ML workflows
- Google Cloud Platform (GCP) services
- Supabase for database and authentication
Frontend:
- JavaScript/TypeScript
- React for web interfaces
- Electron for desktop applications
Development Tools:
- Cursor IDE with AI assistance
- v0.dev for rapid prototyping
- Modern DevOps and CI/CD pipelines
Flexibility:
- Choose your own tech stack when needed - We're open to new tools and frameworks that solve problems better
- Experiment with cutting-edge technologies - If you find a better solution, we're all ears
What We're Looking For
Required Experience
- Proven project portfolio - Show us what you've built, not what you've learned
- Full-stack development experience with Python and JavaScript
- Independent problem-solving skills - You research, experiment, and find solutions
- Experience with modern frameworks (FastAPI, React, or similar)
- Cloud platform familiarity (GCP, AWS, or Azure)
Ideal Candidates Have
- Built and deployed real applications (personal projects, hackathons, open source)
- Experience with browser automation (Selenium, Playwright, Puppeteer, or similar)
- AI/ML model integration experience (LangChain, OpenAI APIs, agent frameworks)
- Security-focused development: understanding of web security principles
- Task learning and reinforcement learning familiarity
- Testing and debugging experience with automated systems and complex applications
- Test environment setup and CI/CD pipeline experience
- Database design and optimization experience
- Desktop application development (Electron or similar)
- DevOps and infrastructure automation knowledge
What We Offer
- Work on cutting-edge browser agent technology - Shape the future of intelligent web automation
- Cutting-edge AI development tools - Full access to Cursor, v0.dev, and other AI assistants
- Technology freedom - Choose the best tools for the job, not just what's already in the stack
- Real project ownership - Your work will directly impact our next-gen browser agents
- Flexible exploration time - Dedicate time to experiment with new AI/ML approaches
- Mentorship from experienced developers - When you need it, not constant hand-holding
- Remote-first environment with flexible working hours
- Competitive internship compensation
What Makes You Stand Out
- Self-starter mentality - You don't wait for detailed instructions
- Curiosity-driven exploration - You love diving into new technologies
- Problem-solving resilience - You debug, research, and iterate until it works
- Quality-focused delivery - You ship polished, well-tested code
- Open source contributions or active GitHub presence
- Technology adaptability - You can evaluate and adopt new tools when they solve problems better
Application Requirements
- Portfolio/GitHub - Show us your best projects with live demos
- Brief cover letter - Tell us about a challenging problem you solved independently
- Technical challenge - We'll provide a small project to assess your problem-solving approach
Not a Good Fit If You
- Need constant guidance and structured learning paths
- Prefer working on assigned tasks without creative input
- Haven't built substantial projects outside of coursework
- Are looking primarily for resume building rather than real contribution
Ready to build something amazing? Send us your portfolio and let's see what you can create with unlimited access to AI development tools and real-world challenges.
We're an equal opportunity employer committed to diversity and inclusion.

About Us
CLOUDSUFI, a Google Cloud Premier Partner, is a Data Science and Product Engineering organization building Products and Solutions for Technology and Enterprise industries. We firmly believe in the power of data to transform businesses and make better decisions. We combine unmatched experience in business processes with cutting-edge infrastructure and cloud services. We partner with our customers to monetize their data and make enterprise data dance.
Our Values
We are a passionate and empathetic team that prioritizes human values. Our purpose is to elevate the quality of lives for our family, customers, partners and the community.
Equal Opportunity Statement
CLOUDSUFI is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. All qualified candidates receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, and national origin status. We provide equal opportunities in employment, advancement, and all other areas of our workplace. Please explore more at https://www.cloudsufi.com/.
Position Overview:
Seeking an experienced Data Engineer to design, develop, and productionize graph database solutions using Neo4j for economic data analysis and modeling. This role requires expertise in graph database architecture, data pipeline development, and production system deployment.
Key Responsibilities
Graph Database Development
- Design and implement Neo4j graph database schemas for complex economic datasets
- Develop efficient graph data models representing economic relationships, transactions, and market dynamics
- Create and optimize Cypher queries for complex analytical workloads
- Build graph-based data pipelines for real-time and batch processing
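A minimal sketch of the kind of Cypher-backed ingestion this involves, assuming the official neo4j Python driver (5.x); the node labels, relationship type, and credentials are hypothetical:

```python
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def record_transaction(tx, buyer: str, seller: str, amount: float):
    # MERGE keeps the graph idempotent under repeated ingestion runs
    tx.run(
        """
        MERGE (b:Company {name: $buyer})
        MERGE (s:Company {name: $seller})
        MERGE (b)-[t:TRANSACTED_WITH]->(s)
        ON CREATE SET t.total = $amount
        ON MATCH SET t.total = t.total + $amount
        """,
        buyer=buyer, seller=seller, amount=amount,
    )

with driver.session() as session:
    session.execute_write(record_transaction, "Acme Corp", "Globex", 25000.0)
driver.close()
```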
Data Engineering & Pipeline Development
- Architect scalable data ingestion frameworks for structured and unstructured economic data
- Develop ETL/ELT processes to transform relational and time-series data into graph formats
- Implement data validation, quality checks, and monitoring systems
- Build APIs and services for graph data access and manipulation
Production Systems & Operations
- Deploy and maintain Neo4j clusters in production environments
- Implement backup, disaster recovery, and high availability solutions
- Monitor database performance, optimize queries, and manage capacity planning
- Establish CI/CD pipelines for graph database deployments
Economic Data Specialization
- Model financial market relationships, economic indicators, and trading networks
- Create graph representations of supply chains, market structures, and economic flows
- Develop graph analytics for fraud detection, risk assessment, and market analysis
- Collaborate with economists and analysts to translate business requirements into graph solutions
Required Qualifications
Technical Skills:
- **Neo4j Expertise**: 3+ years hands-on experience with Neo4j database development
- **Graph Modeling**: Strong understanding of graph theory and data modeling principles
- **Cypher Query Language**: Advanced proficiency in writing complex Cypher queries
- **Programming**: Python, Java, or Scala for data processing and application development
- **Data Pipeline Tools**: Experience with Apache Kafka, Apache Spark, or similar frameworks
- **Cloud Platforms**: AWS, GCP, or Azure with containerization (Docker, Kubernetes)
Database & Infrastructure
- Experience with graph database administration and performance tuning
- Knowledge of distributed systems and database clustering
- Understanding of data warehousing concepts and dimensional modeling
- Familiarity with other databases (PostgreSQL, MongoDB, Elasticsearch)
Economic Data Experience
- Experience working with financial datasets, market data, or economic indicators
- Understanding of financial data structures and regulatory requirements
- Knowledge of data governance and compliance in financial services
Preferred Qualifications
- **Neo4j Certification**: Neo4j Certified Professional or Graph Data Science certification
- **Advanced Degree**: Master's in Computer Science, Economics, or related field
- **Industry Experience**: 5+ years in financial services, fintech, or economic research
- **Additional Skills**: Machine learning on graphs, network analysis, time-series analysis
Technical Environment
- Neo4j Enterprise Edition with APOC procedures
- Apache Kafka for streaming data ingestion
- Apache Spark for large-scale data processing
- Docker and Kubernetes for containerized deployments
- Git, Jenkins/GitLab CI for version control and deployment
- Monitoring tools: Prometheus, Grafana, ELK stack
Application Requirements
- Portfolio demonstrating Neo4j graph database projects
- Examples of production graph systems you've built
- Experience with economic or financial data modeling preferred

Data Science Analyst – Remote
Springer Capital is a real estate investment firm based in Chicago, Shanghai, and Hong Kong. Springer provides capital advisory for APAC private equity and asset management, making financial investments in real estate and other sectors in US markets.
Springer seeks a Data Science Analyst to join the Technology side of the company. The internship can be onsite in Shanghai or conducted remotely. The start date of the internship is flexible.
Job Highlights
As an intern on the data analysis team, you will focus on researching and developing tools and workflows that automate parts of our business processes. As business automation is important throughout the firm, you will have the opportunity to collaborate with teams across Springer.
What you will do as an intern:
Collect data from various sources, including databases, APIs, and web scraping tools.
Clean and process raw data to ensure it is accurate and consistent.
Analyze data to extract insights using computational tools, such as Excel, SQL, and Python.
Communicate insights clearly and concisely to your manager as your work progresses.
Implement solutions based on the insights you discover to improve business processes or solve problems.
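To make the day-to-day concrete, a small hypothetical example of the clean-and-analyze loop described above; the file name and columns are invented.
```python
# Illustrative only: cleaning raw data with pandas and pulling a quick insight.
import pandas as pd

df = pd.read_csv("raw_listings.csv")                       # hypothetical source file
df = df.drop_duplicates()
df["price"] = pd.to_numeric(df["price"], errors="coerce")  # bad values become NaN
df["city"] = df["city"].str.strip().str.title()            # normalize text casing
df = df.dropna(subset=["price", "city"])                   # keep only usable rows
print(df.groupby("city")["price"].median())                # a simple, communicable insight
```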
Our commitment to your development:
Overarching and detailed training materials before interns hit the desk
Interns will have group calls with the director and supervisors regularly for up-to-date and constructive feedback
Greater leadership and responsibilities will be given to interns based on work quality
Who we are looking for:
Strong experience using Excel.
Passionate about analyzing data.
Experience with data analysis (preferred).
Research and problem-solving abilities
About Springer:
Springer Capital focuses on raising capital and solving capital issues in the real estate private equity market. We have experience raising capital for clients across the entire capital spectrum. Client relationships are a top priority for Springer. We establish long-term relationships with investors and lenders as well as have active dialogues with private equities, family offices, pension funds, infrastructure funds, and independent sponsors across Asia. With Springer’s expertise in the housing market, we ensure all parties are aligned in land acquisitions, development, improvements, sales, and lease-up.
Technical and Legal Information
The internship enrollment period is flexible. The expected hours are 20 per week for ~3 months. Upon completion, you will receive an internship confirmation letter, or you can apply to your school for internship credit.


Build a dynamic solution empowering companies to optimize promotional activities for maximum impact. It collects and validates data, analyzes promotion effectiveness, plans calendars, and integrates seamlessly with existing systems. The tool enhances vendor collaboration, negotiates better deals, and employs machine learning to optimize promotional plans, enabling companies to make informed decisions and maximize return on investment.
Technology Stack: Scala, Go, Docker, Kubernetes, Databricks, Python (optional).
Working Time Zone: EU
Specialty: Data Science
Level of the Candidate: more than 5 years of experience
Language Proficiency: English Upper-Intermediate
Required Soft Skills:
- A problem-solving style is valued over experience alone
- Ability to clarify requirements with the customer
- Willingness to pair with other engineers when solving complex issues
- Good communication skills
Hard Skills / Need to Have:
- Experience in Scala and/or Go, designing and building scalable high-performing applications
- Experience in containerization and microservices orchestration using Docker and Kubernetes
- Experience in building data pipelines and ETL solutions using Databricks
- Experience in data storage and retrieval with PostgreSQL and Elasticsearch
- Experience in deploying and maintaining solutions in the Azure cloud environment
- Experience in Python is nice to have
Responsibilities and Tasks:
- Develop and maintain distributed systems using Scala and/or Go
- Work with Docker and Kubernetes for containerization and microservices orchestration
- Build data pipelines and ETL solutions using Databricks
- Work with PostgreSQL and Elasticsearch for data storage and retrieval
- Deploy and maintain solutions in the Azure cloud environment

Snowflake Data Engineer
Job Description:
· Overall experience: 5+ years of experience in Snowflake and Python.
· 5+ years of experience in data preparation and BI projects: understanding business requirements in a BI context and working with data models to transform raw data into meaningful data using Snowflake and Python.
· Designing and creating data models that define the structure and relationships of various data elements within the organization. This includes conceptual, logical, and physical data models, which help ensure data accuracy, consistency, and integrity.
· Designing data integration solutions that allow different systems and applications to share and exchange data seamlessly. This may involve selecting appropriate integration technologies, developing ETL (Extract, Transform, Load) processes, and ensuring data quality during the integration process.
· Create and maintain optimal data pipeline architecture.
· Good knowledge of cloud platforms like AWS/Azure/GCP
· Good hands-on knowledge of Snowflake is a must, including experience with various data ingestion methods (Snowpipe and others), Time Travel, data sharing, and other Snowflake capabilities
· Good knowledge of Python/PySpark, including advanced features of Python
· Support business development efforts (proposals and client presentations).
· Ability to thrive in a fast-paced, dynamic, client-facing role where delivering solid work products to exceed high expectations is a measure of success.
· Excellent leadership and interpersonal skills.
· Eager to contribute to a team-oriented environment.
· Strong prioritization and multi-tasking skills with a track record of meeting deadlines.
· Ability to be creative and analytical in a problem-solving environment.
· Effective verbal and written communication skills.
· Adaptable to new environments, people, technologies, and processes
· Ability to manage ambiguity and solve undefined problems.
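As an illustration of the Snowflake-plus-Python work described above, a minimal sketch using the official connector; the account, credentials, and table names are placeholders.
```python
# Sketch: run a transformation step in Snowflake from Python.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="***",  # placeholders
    warehouse="ANALYTICS_WH", database="RAW", schema="SALES",
)
cur = conn.cursor()
try:
    # Typical ELT step: materialize a reporting table from raw orders.
    cur.execute("""
        CREATE OR REPLACE TABLE SALES_DAILY AS
        SELECT order_date, SUM(amount) AS total_amount
        FROM ORDERS
        GROUP BY order_date
    """)
finally:
    cur.close()
    conn.close()
```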

At TechBiz Global, we provide recruitment services to our top clients from our portfolio. We are currently seeking four DevOps Support Engineers to join one of our clients' teams in India, with a start date no later than 20th July. If you're looking for an exciting opportunity to grow in an innovative environment, this could be the perfect fit for you.
Job requirements
Key Responsibilities:
- Monitor and troubleshoot AWS and/or Azure environments to ensure optimal performance and availability.
- Respond promptly to incidents and alerts, investigating and resolving issues efficiently.
- Perform basic scripting and automation tasks to streamline cloud operations (e.g., Bash, Python).
- Communicate clearly and fluently in English with customers and internal teams.
- Collaborate closely with the Team Lead, following Standard Operating Procedures (SOPs) and escalation workflows.
- Work in a rotating shift schedule, including weekends and nights, ensuring continuous support coverage.
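For context on the scripting side of the responsibilities above, a hedged example of the kind of automation meant here: checking EC2 instance health with boto3 (the region is an assumption).
```python
# Sketch: flag EC2 instances whose status checks are not passing.
import boto3

ec2 = boto3.client("ec2", region_name="ap-south-1")  # region is an assumption
statuses = ec2.describe_instance_status(IncludeAllInstances=True)
for s in statuses["InstanceStatuses"]:
    if s["InstanceStatus"]["Status"] != "ok":
        print(f"Investigate {s['InstanceId']}: status is {s['InstanceStatus']['Status']}")
```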
Shift Details:
- Engineers rotate through morning, evening, and night shifts, including weekends, typically working 4–5 shifts per week so that 24/7 support coverage is shared evenly among the team.
- Rotation ensures no single engineer is always working nights or weekends; the load is shared fairly.
Qualifications:
- 2–5 years of experience in DevOps or cloud support roles.
- Strong familiarity with AWS and/or Azure cloud environments.
- Experience with CI/CD tools such as GitHub Actions or Jenkins.
- Proficiency with monitoring tools like Datadog, CloudWatch, or similar.
- Basic scripting skills in Bash, Python, or comparable languages.
- Excellent communication skills in English.
- Comfortable and willing to work in a shift-based support role, including night and weekend shifts.
- Prior experience in a shift-based support environment is preferred.
What We Offer:
- Remote work opportunity — work from anywhere in India with a stable internet connection.
- Comprehensive training program including:
- Shadowing existing processes to gain hands-on experience.
- Learning internal tools, Standard Operating Procedures (SOPs), ticketing systems, and escalation paths to ensure smooth onboarding and ongoing success.
Job Title: Senior/Lead Performance Test Engineer (JMeter Specialist)
Experience: 5-10 Years
Location: Remote / Pune, India
Job Summary:
We are looking for a highly skilled and experienced Senior/Lead Performance Test Engineer with a strong background in Apache JMeter to lead and execute performance testing initiatives for our web and mobile applications. The ideal candidate will be a hands-on expert in designing, scripting, executing, and analyzing complex performance tests, identifying bottlenecks, and collaborating with cross-functional teams to optimize system performance. This role is critical in ensuring our applications deliver exceptional user experiences under various load conditions.
Key Responsibilities:
Performance Test Strategy & Planning:
Define, develop, and implement comprehensive performance test strategies and plans aligned with project requirements and business objectives for web and mobile applications.
Collaborate with product owners, developers, architects, and operations teams to understand non-functional requirements (NFRs) and service level agreements (SLAs).
Determine appropriate performance test types (Load, Stress, Endurance, Spike, Scalability) and define relevant performance metrics and acceptance criteria.
Scripting & Test Development (JMeter Focus):
Design, develop, and maintain robust and scalable performance test scripts using Apache JMeter for various protocols (HTTP/S, REST, SOAP, JDBC, etc.).
Implement advanced JMeter features including correlation, parameterization, assertions, custom listeners, and logic controllers to simulate realistic user behavior.
Develop modular and reusable test assets.
Integrate performance test scripts into CI/CD pipelines (e.g., Jenkins, GitLab CI, Azure DevOps) for continuous performance monitoring.
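A common shape for that CI integration, sketched below: run the plan in non-GUI mode and gate the build on the error rate. The plan name, result file, and threshold are assumptions; -n/-t/-l are standard JMeter CLI flags, and the parsing assumes JMeter's default CSV results format.
```python
# Sketch of a CI step: non-GUI JMeter run plus a simple pass/fail gate.
import csv
import subprocess

subprocess.run(
    ["jmeter", "-n", "-t", "checkout.jmx", "-l", "results.jtl"],
    check=True,  # fail fast if JMeter itself errors out
)
with open("results.jtl") as f:
    rows = list(csv.DictReader(f))
error_rate = sum(r["success"] == "false" for r in rows) / len(rows)
assert error_rate < 0.01, f"error rate too high: {error_rate:.2%}"
```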
Test Execution & Monitoring:
Set up and configure performance test environments, ensuring they accurately mimic production infrastructure (including cloud environments like AWS, Azure, GCP).
Execute performance tests in various environments, managing large-scale load generation using JMeter (standalone or distributed mode).
Monitor system resources (CPU, Memory, Disk I/O, Network) and application performance metrics using various tools (e.g., Grafana, Prometheus, ELK stack, AppDynamics, Dynatrace, New Relic) during test execution.
Analysis & Reporting:
Analyze complex performance test results, identify performance bottlenecks, and pinpoint root causes across application, database, and infrastructure layers.
Interpret monitoring data, logs, and profiling reports to provide actionable insights and recommendations for performance improvements.
Prepare clear, concise, and comprehensive performance test reports, presenting findings, risks, and optimization recommendations to technical and non-technical stakeholders.
Collaboration & Mentorship:
Work closely with development and DevOps teams to troubleshoot, optimize, and resolve performance issues.
Act as a subject matter expert in performance testing, providing technical guidance and mentoring to junior team members.
Contribute to the continuous improvement of performance testing processes, tools, and best practices.
Required Skills & Qualifications:
Bachelor's degree in Computer Science, Engineering, or a related field.
5-10 years of hands-on experience in performance testing, with a strong focus on web and mobile applications.
Expert-level proficiency with Apache JMeter for scripting, execution, and analysis.
Strong understanding of performance testing methodologies, concepts (e.g., throughput, response time, latency, concurrency), and lifecycle.
Experience with performance monitoring tools such as Grafana, Prometheus, CloudWatch, Azure Monitor, GCP Monitoring, AppDynamics, Dynatrace, or New Relic.
Solid understanding of web technologies (HTTP/S, REST APIs, WebSockets, HTML, CSS, JavaScript) and modern application architectures (Microservices, Serverless).
Experience with database performance analysis (SQL/NoSQL) and ability to write complex SQL queries.
Familiarity with cloud platforms (AWS, Azure, GCP) and experience in testing applications deployed in cloud environments.
Proficiency in scripting languages (e.g., Groovy, Python, Shell scripting) for custom scripting and automation.
Excellent analytical, problem-solving, and debugging skills.
Strong communication (written and verbal) and interpersonal skills, with the ability to effectively collaborate with diverse teams and stakeholders.
Ability to work independently, manage multiple priorities, and thrive in a remote or hybrid work setup.
Good to Have Skills:
Experience with other performance testing tools (e.g., LoadRunner, Gatling, k6, BlazeMeter).
Knowledge of CI/CD pipelines and experience integrating performance tests into automated pipelines.
Understanding of containerization technologies (Docker, Kubernetes).
Experience with mobile application performance testing tools and techniques (e.g., device-level monitoring, network emulation).
Certifications in performance testing or cloud platforms.
Salesforce DevOps/Release Engineer
Resource type - Salesforce DevOps/Release Engineer
Experience - 5 to 8 years
Norms - PF & UAN mandatory
Resource Availability - Immediate or Joining time in less than 15 days
Job - Remote
Shift timings - UK timing (1pm to 10 pm or 2pm to 11pm)
Required Experience:
- 5–6 years of hands-on experience in Salesforce DevOps, release engineering, or deployment management.
- Strong expertise in Salesforce deployment processes, including CI/CD pipelines.
- Significant hands-on experience with at least two of the following tools: Gearset, Copado, Flosum.
- Solid understanding of Salesforce architecture, metadata, and development lifecycle.
- Familiarity with version control systems (e.g., Git) and agile methodologies
Key Responsibilities:
- Design, implement, and manage CI/CD pipelines for Salesforce deployments using Gearset, Copado, or Flosum.
- Automate and optimize deployment processes to ensure efficient, reliable, and repeatable releases across Salesforce environments.
- Collaborate with development, QA, and operations teams to gather requirements and ensure alignment of deployment strategies.
- Monitor, troubleshoot, and resolve deployment and release issues.
- Maintain documentation for deployment processes and provide training on best practices.
- Stay updated on the latest Salesforce DevOps tools, features, and best practices.
Technical Skills:
- Deployment Tools: Hands-on with Gearset, Copado, Flosum for Salesforce deployments
- CI/CD: Building and maintaining pipelines, automation, and release management
- Version Control: Proficiency with Git and related workflows
- Salesforce Platform: Understanding of metadata, SFDX, and environment management
- Scripting: Familiarity with scripting (e.g., Shell, Python) for automation (preferred)
- Communication: Strong written and verbal communication skills
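For illustration only, one way a pipeline step might trigger a deployment; the exact sf CLI syntax varies by version, so treat the command, org alias, and manifest path below as assumptions.
```python
# Sketch: trigger a Salesforce deployment from a pipeline script.
import subprocess

result = subprocess.run(
    ["sf", "project", "deploy", "start",       # command syntax is an assumption
     "--manifest", "manifest/package.xml",
     "--target-org", "uat",                    # placeholder org alias
     "--test-level", "RunLocalTests"],
    capture_output=True, text=True,
)
print(result.stdout)
result.check_returncode()  # fail the pipeline if the deploy failed
```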
Preferred Qualifications:
Bachelor’s degree in Computer Science, Information Technology, or related field.
Certifications:
Salesforce certifications (e.g., Salesforce Administrator, Platform Developer I/II) are a plus.
Experience with additional DevOps tools (Jenkins, GitLab, Azure DevOps) is beneficial.
Experience with Salesforce DX and deployment strategies for large-scale orgs.

We are looking for an experienced and dynamic Technical Trainer to join our team. The ideal candidate will be responsible for designing, developing, and delivering high-quality technical training programs to students.


We are looking for a dynamic and skilled Business Analyst Trainer with 2 to 5 years of hands-on industry and/or teaching experience. The ideal candidate should be able to simplify complex data concepts, mentor aspiring professionals, and deliver effective training programs in Business Analysis, Power BI, Tableau, and Machine Learning.


We are seeking a passionate and knowledgeable Data Science and Data Analyst Trainer to deliver engaging and industry-relevant training programs. The trainer will be responsible for teaching core concepts in data analytics, machine learning, data visualization, and related tools and technologies. The ideal candidate will have 2-5 years of hands-on experience in the data domain and a flair for teaching and mentoring students or working professionals.

About Us:
Heyo & MyOperator are India’s largest conversational platforms, delivering Call + WhatsApp engagement solutions to 40,000+ businesses. Trusted by brands like Astrotalk, Lenskart, and Caratlane, we power customer engagement at scale. We support a hybrid work model, foster a collaborative environment, and offer fast-track growth opportunities.
Job Overview:
We are looking for a skilled Quality Analyst with 2-4 years of experience in software quality assurance. The ideal candidate should have a strong understanding of testing methodologies, automation tools, and defect tracking to ensure high-quality software products. This is a fully remote role.
Key Responsibilities:
● Develop and execute test plans, test cases, and test scripts for software products.
● Conduct manual and automated testing to ensure reliability and performance.
● Identify, document, and collaborate with developers to resolve defects and issues.
● Report testing progress and results to stakeholders and management.
● Improve automation testing processes for efficiency and accuracy.
● Stay updated with the latest QA trends, tools, and best practices.
Required Skills:
● 2-4 years of experience in software quality assurance.
● Strong understanding of testing methodologies and automated testing.
● Proficiency in Selenium, Rest Assured, Java, and API Testing (mandatory).
● Familiarity with Appium, JMeter, TestNG, defect tracking, and version control tools.
● Strong problem-solving, analytical, and debugging skills.
● Excellent communication and collaboration abilities.
● Detail-oriented with a commitment to delivering high-quality results.
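The role itself calls for Java with Rest Assured; purely to illustrate the API-testing pattern in this Python-focused listing, an equivalent check with requests and pytest against a hypothetical endpoint.
```python
# Pattern sketch: assert on status code and response shape of an API.
import requests

def test_get_user_returns_expected_fields():
    resp = requests.get("https://api.example.com/users/42", timeout=5)  # hypothetical
    assert resp.status_code == 200
    body = resp.json()
    assert {"id", "name", "email"} <= body.keys()
```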
Why Join Us?
● Fully remote work with flexible hours.
● Exposure to industry-leading technologies and practices.
● Collaborative team culture with growth opportunities.
● Work with top brands and innovative projects.

We’re seeking a passionate and skilled Technical Trainer to deliver engaging, hands-on training in HTML, CSS, and Python-based front-end development. You’ll mentor learners, design curriculum, and guide them through real-world projects to build strong foundational and practical skills.


Job Title : Senior Python Developer
Experience : 7+ Years
Location : Remote or Hybrid (Gurgaon / Coimbatore / Hyderabad)
Job Summary :
We are looking for a highly skilled and motivated Senior Python Developer to join our dynamic engineering team.
The ideal candidate will have a strong foundation in web application development using Python and related frameworks. A passion for writing clean, scalable code and solving complex technical challenges is essential for success in this role.
Mandatory Skills : Python (3.x), FastAPI or Flask, PostgreSQL or Oracle, ORM, API Microservices, Agile Methodologies, Clean Code Practices.
Required Skills and Qualifications :
- 7+ Years of hands-on experience in Python (3.x) development.
- Strong proficiency in FastAPI or Flask frameworks.
- Experience with relational databases like PostgreSQL, Oracle, or similar, along with ORM tools.
- Demonstrated experience in building and maintaining API-based microservices.
- Solid grasp of Agile development methodologies and version control practices.
- Strong analytical and problem-solving skills.
- Ability to write clean, maintainable, and well-documented code.
Nice to Have :
- Experience with Google Cloud Platform (GCP) or other cloud providers.
- Exposure to Kubernetes and container orchestration tools.


About the CryptoXpress Partner Program
Earn lifetime income by liking posts; posting memes, art, and simple threads; engaging on Twitter, Quora, Reddit, or Instagram; referring signups; and earning commission from transactions such as flights, hotels, trades, gift cards, and more.
(Apply link at the bottom)
More Details:
- Student Partner Program - https://cryptoxpress.com/student-partner-program
- Ambassador Program - https://cryptoxpressambassadors.com
CryptoXpress has built two powerful tracks to help students gain experience, earn income, and launch real careers:
🌱 Growth Partner: Bring in new users, grow the network, and earn lifetime income from your referrals' transactions like trades, investments, flight/hotel/gift card purchases.
🎯 CX Ambassador: Complete creative tasks, support the brand, and get paid by liking posts, creating simple threads, memes, art, sharing your experience, and engaging on Twitter, Quora, Reddit, or Instagram.
Participants will be rewarded with payments, internship certificates, mentorship, certified Web3 learning and career opportunities.
About the Role
CryptoXpress is looking for a skilled Backend Engineer to build the core logic powering our Partner Program reward engines, task pipelines, and content validation systems. Your work will directly impact how we scale fair, fast, and fraud-proof systems for global Student Partners and CX Ambassadors.
Key Responsibilities
- Design APIs to handle submission, review, and payout logic
- Develop XP, karma, and level-up algorithms with fraud resistance
- Create content verification checkpoints (e.g., metadata checks, submission throttles)
- Handle rate limits, caching, retries, and fallback for reward processing
- Collaborate with AI and frontend engineers for seamless data flow
- Debug reward or submission logic
- Fix issues in task flows or XP systems
- Patch verification bugs or payout edge cases
- Optimize performance and API stability
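As a taste of the reliability work above, a self-contained sketch of retry-with-backoff plus a fallback path for payouts; the payout sender is a stub, not a real CryptoXpress API.
```python
# Sketch: exponential backoff with jitter, then a manual-review fallback.
import random
import time

class TransientError(Exception):
    pass

def send_payout(payout: dict) -> str:
    # Stub standing in for a real payout API call.
    if random.random() < 0.5:
        raise TransientError("rate limited")
    return f"paid:{payout['id']}"

def process_payout(payout: dict, attempts: int = 4) -> str:
    for attempt in range(attempts):
        try:
            return send_payout(payout)
        except TransientError:
            time.sleep(2 ** attempt * 0.1 + random.random() * 0.1)  # backoff + jitter
    return "queued_for_manual_review"  # fallback instead of silent failure

print(process_payout({"id": "task-123", "xp": 50}))
```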
Skills & Qualifications
- Proficient in Node.js, Python (Flask/FastAPI), or Go
- Solid understanding of PostgreSQL, Firebase, or equivalent databases
- Strong grasp of authentication, role-based permissions, and API security
- Bonus: Experience with reward engines, affiliate logic, or task-based platforms
- Bonus: Familiarity with moderation tooling or content scoring
Join us and play a key role in driving the growth of CryptoXpress in the cryptocurrency space!
Pro Tips for Application Success:
- Please fill out the application below
- Explore CryptoXpress before applying, take 2 minutes to download and try the app so you understand what we're building
- Show your enthusiasm for crypto, travel, and digital innovation
- Mention any self-learning initiatives or personal crypto experiments
- Be honest about what you don't know - we value growth mindsets
How to Apply:
Interested candidates must complete the application form at


About the Role
At Ceryneian, we’re building a next-generation, research-driven algorithmic trading platform aimed at democratizing access to hedge fund-grade financial analytics. Headquartered in California, Ceryneian is a fintech innovation company dedicated to empowering traders with sophisticated yet accessible tools for quantitative research, strategy development, and execution.
Our flagship platform is currently under development. As a Backend Engineer, you will play a foundational role in designing and building the core trading engine and research infrastructure from the ground up. Your work will focus on developing performance-critical components that power backtesting, real-time strategy execution, and seamless integration with brokers and data providers. You’ll be responsible for bridging core engine logic with Python-based strategy interfaces, supporting a modular system architecture for isolated and scalable strategy execution, and building robust abstractions for data handling and API interactions. This role is central to delivering the reliability, flexibility, and performance that our users will rely on in fast-moving financial markets.
We are a remote-first team and are open to hiring exceptional candidates globally.
Core Tasks
· Build and maintain the trading engine core for execution, backtesting, and event logging.
· Develop isolated strategy execution runners to support multi-user, multi-strategy environments.
· Implement abstraction layers for brokers and market data feeds to offer a unified API experience.
· Bridge the core engine language with Python strategies using gRPC, ZeroMQ, or similar interop technologies.
· Implement logic to parse and execute JSON-based strategy DSL from the strategy builder.
· Design compute-optimized components for multi-asset workflows and scalable backtesting.
· Capture real-time state, performance metrics, and slippage for both live and simulated runs.
· Collaborate with infrastructure engineers to support high-availability deployments.
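To make the Python-interop task above concrete, a minimal sketch of the strategy side of such a bridge using pyzmq and JSON messages; the port, message schema, and strategy logic are assumptions.
```python
# Sketch: a strategy worker answering engine requests over ZeroMQ (REQ/REP).
import zmq

ctx = zmq.Context()
sock = ctx.socket(zmq.REP)
sock.bind("tcp://*:5555")  # the engine would connect with a REQ socket

while True:
    event = sock.recv_json()  # e.g. {"type": "tick", "symbol": "AAPL", "price": 189.3}
    # Real strategy logic would run here; this stub just holds.
    sock.send_json({"action": "hold", "symbol": event.get("symbol")})
```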
Top Technical Competencies
· Proficiency in distributed systems, concurrency, and system design.
· Strong backend/server-side development skills using C++, Rust, C#, Erlang, or Python.
· Deep understanding of data structures and algorithms with a focus on low-latency performance.
· Experience with event-driven and messaging-based architectures (e.g., ZeroMQ, Redis Streams).
· Familiarity with Linux-based environments and system-level performance tuning.
Bonus Competencies
· Understanding of financial markets, asset classes, and algorithmic trading strategies.
· 3–5 years of prior Backend experience.
· Hands-on experience with backtesting frameworks or financial market simulators.
· Experience with sandboxed execution environments or paper trading platforms.
· Advanced knowledge of multithreading, memory optimization, or compiler construction.
· Educational background from Tier-I or Tier-II institutions with strong computer science fundamentals, a passion for scalable system design, and a drive to build cutting-edge fintech infrastructure.
What We Offer
· Opportunity to shape the backend architecture of a next-gen fintech startup.
· A collaborative, technically driven culture.
· Competitive compensation with performance-based bonuses.
· Flexible working hours and a remote-friendly environment for candidates across the globe.
· Exposure to financial modeling, trading infrastructure, and real-time applications.
· Collaboration with a world-class team from Pomona, UCLA, Harvey Mudd, and Claremont McKenna.
Ideal Candidate
You’re a backend-first thinker who’s obsessed with reliability, latency, and architectural flexibility. You enjoy building scalable systems that transform complex strategy logic into high-performance, real-time trading actions. You think in microseconds, architect for fault tolerance, and build APIs designed for developer extensibility.

Founding Engineer - LITMAS
About LITMAS
LITMAS is revolutionizing litigation with the first AI-powered platform built specifically for elite litigators. We're transforming how attorneys research, strategize, draft, and win cases by combining comprehensive case repositories with cutting-edge AI validation and workflow automation. We are a team incubated by experienced litigators, building the future of legal technology.
The Opportunity
We're seeking a Founding Engineer to join our core team and shape the technical foundation of LITMAS. This is a rare opportunity to build a category-defining product from the ground up, working directly with the founders to create technology that will transform the US litigation market.
As a founding engineer, you'll have significant ownership over our technical architecture, product decisions, and company culture. Your code will directly impact how thousands of attorneys practice law.
What You'll Do
- Architect and build core platform features using Python, Node.js, Next.js, React, and MongoDB
- Design and implement production-grade LLM systems with advanced tool usage, RAG pipelines, and agent architectures
- Build AI workflows that combine multiple tools for legal research, validation, and document analysis
- Create scalable RAG infrastructure to handle thousands of legal documents with high accuracy
- Implement AI tool chains that provide agents with structured tool inputs
- Design intuitive interfaces that make complex legal workflows simple and powerful
- Own end-to-end features from conception through deployment and iteration
- Establish engineering best practices for AI systems including evaluation, monitoring, and safety
- Collaborate directly with founders on product strategy and technical roadmap
The Ideal Candidate
You're not just an AI engineer; you're someone who understands how to build reliable, production-grade AI systems that users can trust. You've wrestled with RAG accuracy, tool reliability, and LLM hallucinations in production. You know the difference between a demo and a system that handles real-world complexity. You're excited about applying AI to transform how legal professionals work.
What We're Looking For
Must-Haves
- Deployed production-grade LLM applications with demonstrable experience in:
- Tool usage and function calling
- RAG (Retrieval-Augmented Generation) implementation at scale
- Agent architectures and multi-step reasoning
- Prompt engineering and optimization
- Knowledge of multiple LLM providers (OpenAI, Anthropic, Cohere, open-source models)
- Background in building AI evaluation and monitoring systems
- Experience with document processing and OCR technologies
- 3+ years of production experience with Node.js, Python, Next.js, and React
- Strong MongoDB expertise including schema design and optimization
- Experience with vector databases (Pinecone, Weaviate, Qdrant, or similar)
- Full-stack mindset with ability to own features from database to UI
- Track record of shipping complex web applications at scale
- Deep understanding of LLM limitations, hallucination prevention, and validation techniques
Tech Stack
- Backend: Node.js, Express, MongoDB
- Frontend: Next.js, React, TypeScript, Modern CSS
- AI/ML: LangChain/LlamaIndex, OpenAI/Anthropic APIs, vector databases, custom AI tools
- Additional: Document processing, search infrastructure, real-time collaboration
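For orientation, a hedged sketch of the retrieval step in a RAG pipeline (not LITMAS's actual implementation): embed a query with the OpenAI SDK and rank passages by cosine similarity.
```python
# Sketch: embed-and-rank retrieval over a tiny in-memory corpus.
import numpy as np
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
docs = ["Case A held that...", "Case B distinguished...", "Statute C provides..."]

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

doc_vecs = embed(docs)
q = embed(["What did Case A hold?"])[0]
scores = doc_vecs @ q / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q))
print(docs[int(np.argmax(scores))])  # best passage to hand to the LLM as context
```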
What We Offer
- Significant equity stake: true ownership in the company you're building
- Competitive compensation commensurate with experience
- Direct impact: your decisions shape the product and company
- Learning opportunity: work with cutting-edge AI and legal technology
- Flexible work: remote-first with a global team
- AI resources: access to the latest models and compute resources
Interview Process
One more thing: our process includes deep technical interviews and fit conversations. As part of the evaluation, there will be an extensive take-home test that you should expect to take at least 4-5 hours, depending on your skill level. This allows us to see how you approach real problems similar to what you'll encounter at LITMAS.

Job Title: BigID Deployment Lead/ SME
Duration: 6+ Months
Exp. Level: 8-12yrs
Job Summary:
We are seeking a highly skilled and experienced BigID Deployment Lead / Subject Matter Expert (SME) to lead the implementation, configuration, and optimization of BigID's data intelligence platform. The ideal candidate will have deep expertise in data discovery, classification, privacy, and governance, and will play a pivotal role in ensuring successful deployment and integration of BigID solutions across enterprise environments.
Key Responsibilities:
Lead end-to-end deployment and configuration of BigID solutions in complex enterprise environments.
Serve as the primary SME for BigID, advising stakeholders on best practices, architecture, and integration strategies.
Collaborate with cross-functional teams including security, compliance, data governance, and IT to align BigID capabilities with business requirements.
Customize and fine-tune BigID policies, connectors, and scanning configurations to meet data privacy and compliance objectives (e.g., GDPR, CCPA, HIPAA).
Conduct workshops, training sessions, and knowledge transfers for internal teams and clients.
Troubleshoot and resolve technical issues related to BigID deployment, performance, and data discovery.
Stay current with BigID product updates, industry trends, and regulatory changes to ensure continuous improvement and compliance.
Required Qualifications:
Bachelor's or Master's degree in Computer Science, Information Technology, Cybersecurity, or a related field.
5+ years of experience in data governance, privacy, or security domains.
2+ years of hands-on experience with BigID platform deployment and configuration.
Strong understanding of data classification, metadata management, and data mapping.
Experience with cloud platforms (AWS, Azure, GCP) and integrating BigID with cloud-native services.
Familiarity with data privacy regulations (GDPR, CCPA, etc.) and risk management frameworks.
Excellent communication, documentation, and stakeholder management skills.
Preferred Qualifications:
BigID certification(s) or formal training.
Experience with scripting (Python, PowerShell) and API integrations.
Background in enterprise data architecture or data security.
Experience working in Agile/Scrum environments.

Description
Job Description:
Company: Springer Capital
Type: Internship (Remote, Part-Time/Full-Time)
Duration: 3–6 months
Start Date: Rolling
Compensation:
About the role:
We’re building high-performance backend systems that power our financial and ESG intelligence platforms, and we want you on the team. As a Backend Engineering Intern, you’ll help us develop scalable APIs, automate data pipelines, and deploy secure cloud infrastructure. This is your chance to work alongside experienced engineers, contribute to real products, and see your code go live.
What You'll Work On:
As a Backend Engineering Intern, you’ll be shaping the systems that power financial insights.
Engineering scalable backend services in Python, Node.js, or Go
Designing and integrating RESTful APIs and microservices
Working with PostgreSQL, MongoDB, or Redis for data persistence
Deploying on AWS/GCP, using Docker, and learning Kubernetes on the fly
Automating infrastructure and shipping faster with CI/CD pipelines
Collaborating with a product-focused team that values fast iteration
What We’re Looking For:
A builder mindset – you like writing clean, efficient code that works
Strong grasp of backend languages (Python, Java, Node, etc.)
Understanding of cloud platforms and containerization basics
Basic knowledge of databases and version control
Students or self-taught engineers actively learning and building
Preferred skills:
Experience with serverless or event-driven architectures
Familiarity with DevOps tools or monitoring systems
A curious mind for AI/ML, fintech, or real-time analytics
What You’ll Get:
Real-world experience solving core backend problems
Autonomy and ownership of live features
Mentorship from engineers who’ve built at top-tier startups
A chance to grow into a full-time offer



About the Role:
We are looking for a Senior Technical Customer Success Manager to join our growing team. This is a client-facing role focused on ensuring successful adoption and value realization of our SaaS solutions. The ideal candidate will come from a strong analytics background, possess hands-on skills in SQL and Python or R, and have experience working with dashboarding tools. Prior experience in eCommerce or retail domains is a strong plus.
Responsibilities:
- Own post-sale customer relationship and act as the primary technical point of contact.
- Drive product adoption and usage through effective onboarding, training, and ongoing support.
- Work closely with clients to understand business goals and align them with product capabilities.
- Collaborate with internal product, engineering, and data teams to deliver solutions and enhancements tailored to client needs.
- Analyze customer data and usage trends to proactively identify opportunities and risks.
- Build dashboards or reports for customers using internal tools or integrations.
- Lead business reviews, share insights, and communicate value delivered.
- Support customers in configuring rules, data integrations, and troubleshooting issues.
- Drive renewal and expansion by ensuring customer satisfaction and delivering measurable outcomes.
Requirements:
- 7+ years of experience in a Customer Success, Technical Account Management, or Solution Consulting role in a SaaS or software product company.
- Strong SQL skills and working experience with Python or R.
- Experience with dashboarding tools such as Tableau, Power BI, Looker, or similar.
- Understanding of data pipelines, APIs, and data modeling.
- Excellent communication and stakeholder management skills.
- Proven track record of managing mid to large enterprise clients.
- Experience in eCommerce, retail, or consumer-facing businesses is highly desirable.
- Ability to translate technical details into business context and vice versa.
- Bachelor’s or Master’s degree in Computer Science, Analytics, Engineering, or related field.
Nice to Have:
- Exposure to machine learning workflows, recommendation systems, or pricing analytics.
- Familiarity with cloud platforms (AWS/GCP/Azure).
- Experience working with cross-functional teams in Agile environments.

Required Skills:
• Basic understanding of machine learning concepts and algorithms
• Proficiency in Python and relevant libraries (NumPy, Pandas, scikit-learn)
• Familiarity with data preprocessing techniques
• Knowledge of basic statistical concepts
• Understanding of model evaluation metrics
• Basic experience with at least one deep learning framework (TensorFlow, PyTorch)
• Strong analytical and problem-solving abilities
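A compact example of those basics end to end, using only scikit-learn built-ins: preprocessing without leakage, a simple model, and two evaluation metrics.
```python
# Sketch: train/test split, scaling fit on train only, and model evaluation.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

scaler = StandardScaler().fit(X_train)  # fit on train only to avoid leakage
model = LogisticRegression(max_iter=1000).fit(scaler.transform(X_train), y_train)
pred = model.predict(scaler.transform(X_test))
print(f"accuracy={accuracy_score(y_test, pred):.3f}, f1={f1_score(y_test, pred):.3f}")
```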
Application Process: Create your profile on our platform, submit your portfolio, GitHub profile, or sample projects.

Primary skill set: QA Automation, Python, BDD, SQL
As Senior Data Quality Engineer you will:
- Evaluate product functionality and create test strategies and test cases to assess product quality.
- Work closely with the on-shore and the offshore team.
- Validate multiple reports against databases by running medium-to-complex SQL queries.
- Build a solid understanding of automation objects and integrations across various platforms and applications.
- Work as an individual contributor, exploring opportunities to improve performance and articulating the importance and advantages of proposed improvements to management.
- Integrate with SCM infrastructure to establish a continuous build and test cycle using CICD tools.
- Comfortable working on Linux/Windows environment(s) and Hybrid infrastructure models hosted on Cloud platforms.
- Establish processes and tools set to maintain automation scripts and generate regular test reports.
- Conduct peer reviews to provide feedback and ensure the test scripts are flawless.
Core/Must have skills:
- Excellent understanding of and hands-on experience in ETL/DWH testing, preferably on Databricks, paired with Python experience.
- Hands-on experience with SQL (analytical functions and complex queries), along with knowledge of using SQL client utilities effectively.
- Clear & crisp communication and commitment towards deliverables
- Experience in Big Data testing will be an added advantage.
- Knowledge of Spark and Scala, Hive/Impala, and Python will be an added advantage.
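To illustrate the report-validation work above, a self-contained sketch that checks report totals against a database query; sqlite3 stands in for the real warehouse, and the table and figures are invented.
```python
# Sketch: compare report numbers with a grouped SQL aggregate.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders(id INTEGER, region TEXT, amount REAL);
    INSERT INTO orders VALUES (1,'APAC',10),(2,'APAC',20),(3,'EU',5);
""")
expected = {"APAC": 30.0, "EU": 5.0}  # totals as shown on the report
actual = dict(conn.execute(
    "SELECT region, SUM(amount) FROM orders GROUP BY region"
))
assert actual == expected, f"report mismatch: {actual} != {expected}"
```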
Good to have skills:
- Test automation using BDD/Cucumber or TestNG, combined with strong hands-on Java-with-Selenium experience; working experience in WebdriverIO is especially valued.
- Ability to effectively articulate technical challenges and solutions
- Work experience in qTest, Jira, and WebdriverIO



Title - Principal Software Engineer
Company Summary :
As the recognized global standard for project-based businesses, Deltek delivers software and information solutions to help organizations achieve their purpose. Our market leadership stems from the work of our diverse employees who are united by a passion for learning, growing and making a difference. At Deltek, we take immense pride in creating a balanced, values-driven environment, where every employee feels included and empowered to do their best work. Our employees put our core values into action daily, creating a one-of-a-kind culture that has been recognized globally. Thanks to our incredible team, Deltek has been named one of America's Best Midsize Employers by Forbes, a Best Place to Work by Glassdoor, a Top Workplace by The Washington Post and a Best Place to Work in Asia by World HRD Congress. www.deltek.com
Business Summary :
The Deltek Engineering and Technology team builds best-in-class solutions to delight customers and meet their business needs. We are laser-focused on software design, development, innovation and quality. Our team of experts has the talent, skills and values to deliver products and services that are easy to use, reliable, sustainable and competitive. If you're looking for a safe environment where ideas are welcome, growth is supported and questions are encouraged – consider joining us as we explore the limitless opportunities of the software industry.
Principal Software Engineer
Position Responsibilities :
- Develop and manage integrations with third-party services and APIs using industry-standard protocols like OAuth2 for secure authentication and authorization.
- Develop scalable, performant APIs for Deltek products
- Accountability for the successful implementation of the requirements by the team.
- Troubleshoot, debug, and optimize code and workflows for better performance and scalability.
- Undertake analysis, design, coding and testing activities of complex modules
- Support the company’s development processes and development guidelines including code reviews, coding style and unit testing requirements.
- Participate in code reviews and provide mentorship to junior developers.
- Stay up-to-date with emerging technologies and best practices in Python development, AWS, and frontend frameworks like React, and suggest optimisations based on them
- Adopt industry best practices in all your projects - TDD, CI/CD, Infrastructure as Code, linting
- Pragmatic enough to deliver an MVP, but aspirational enough to think about how it will work with millions of users and adapt to new challenges
- Readiness to hit the ground running – you may not know how to solve everything right off the bat, but you will put in the time and effort to understand so that you can design architecture of complex features with multiple components.
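Since the responsibilities above lead with OAuth2-secured integrations, a hedged illustration of the client-credentials flow; the token URL, client ID/secret, and API endpoint are all placeholders.
```python
# Sketch: obtain a bearer token, then call a protected API.
import requests

token_resp = requests.post(
    "https://auth.example.com/oauth/token",  # placeholder token endpoint
    data={"grant_type": "client_credentials",
          "client_id": "my-client", "client_secret": "***"},
    timeout=10,
)
token = token_resp.json()["access_token"]

api_resp = requests.get(
    "https://api.example.com/v1/projects",   # placeholder protected endpoint
    headers={"Authorization": f"Bearer {token}"},
    timeout=10,
)
print(api_resp.status_code)
```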
Qualifications :
- A college degree in Computer Science, Software Engineering, Information Science or a related field is required
- Minimum 8-10 years of experience with sound programming skills in Python, the .NET platform (VB & C#), TypeScript/JavaScript, frontend technologies like React.js/Ember.js, and SQL databases (like PostgreSQL)
- Experience in backend development and Apache Airflow (or equivalent framework).
- Build APIs and optimize SQL queries with performance considerations.
- Experience with Agile Development
- Experience in writing and maintaining unit tests and using testing frameworks is desirable
- Exposure to Amazon Web Services (AWS) technologies, Terraform, Docker is a plus
- Strong desire to continually improve knowledge and skills through personal development activities and apply their knowledge and skills to continuous software improvement.
- The ability to work under tight deadlines, tolerate ambiguity and work effectively in an environment with multiple competing priorities.
- Strong problem-solving and debugging skills.
- Ability to work in an Agile environment and collaborate with cross-functional teams.
- Familiarity with version control systems like Git.
- Excellent communication skills and the ability to work effectively in a remote or hybrid team setting.

About Us:
MyOperator and Heyo are India’s leading conversational platforms empowering 40,000+ businesses with Call and WhatsApp-based engagement. We’re a product-led SaaS company scaling rapidly, and we’re looking for a skilled Software Developer to help build the next generation of scalable backend systems.
Role Overview:
We’re seeking a passionate Python Developer with strong experience in backend development and cloud infrastructure. This role involves building scalable microservices, integrating AI tools like LangChain/LLMs, and optimizing backend performance for high-growth B2B products.
Key Responsibilities:
- Develop robust backend services using Python, Django, and FastAPI
- Design and maintain scalable microservices architecture
- Integrate LangChain/LLMs into AI-powered features
- Write clean, tested, and maintainable code with pytest
- Manage and optimize databases (MySQL/Postgres)
- Deploy and monitor services on AWS
- Collaborate across teams to define APIs, data flows, and system architecture
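A minimal sketch of the stack described above, assuming FastAPI and pytest; the LLM call is stubbed where a LangChain-backed implementation might sit.
```python
# Sketch: a tiny FastAPI microservice with an in-process pytest check.
from fastapi import FastAPI
from fastapi.testclient import TestClient

app = FastAPI()

@app.get("/health")
def health():
    return {"status": "ok"}

@app.post("/summarize")
def summarize(payload: dict):
    text = payload.get("text", "")
    return {"summary": text[:50]}  # stub; an LLM-backed summary would go here

def test_health():
    client = TestClient(app)
    assert client.get("/health").json() == {"status": "ok"}
```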
Must-Have Skills:
- Python and Django
- MySQL or Postgres
- Microservices architecture
- AWS (EC2, RDS, Lambda, etc.)
- Unit testing using pytest
- LangChain or Large Language Models (LLM)
- Strong grasp of Data Structures & Algorithms
- AI coding assistant tools (e.g., ChatGPT & Gemini)
Good to Have:
- MongoDB or ElasticSearch
- Go or PHP
- FastAPI
- React, Bootstrap (basic frontend support)
- ETL pipelines, Jenkins, Terraform
Why Join Us?
- 100% Remote role with a collaborative team
- Work on AI-first, high-scale SaaS products
- Drive real impact in a fast-growing tech company
- Ownership and growth from day one


LendFlow is an AI-powered home loan assessment platform that helps mortgage brokers and lenders save hours by automating document analysis, income validation, and serviceability assessment. We turn complex financial documents into clear insights—fast.
We’re building a smart assistant that ingests client docs (bank statements, payslips, loan summaries) and uses modular AI agents to extract, classify, and summarize financial data in minutes, not hours. Think OCR + AI agents + compliance-ready outputs.
🛠️ What You’ll Be Building
As part of our early technical team, you’ll help us develop and launch our MVP. Key modules include:
- Document ingestion and OCR processing (Textract, Document AI)
- AI agent workflows using LangChain or CrewAI
- Serviceability calculators with business rule engines
- React + Next.js frontend for brokers and analysts
- FastAPI backend with PostgreSQL
- Security, encryption, audit logging (privacy-first design)
🎯 We’re Looking For:
Must-Have Skills:
- Strong experience with Python (FastAPI, OCR, LLMs, prompt engineering)
- Familiarity with AI agent frameworks (LangChain, CrewAI, Autogen, or similar)
- Frontend skills in React.js / Next.js
- Experience with PostgreSQL and cloud storage (AWS/GCP)
- Understanding of financial documents and data privacy best practices
Bonus Points:
- Experience with OCR tools like Amazon Textract, Tesseract, or Document AI
- Building ML/NLP pipelines in real-world apps
- Prior work in fintech, lending, or proptech sectors

About the Company
Hypersonix.ai is disrupting the e-commerce space with AI, ML, and advanced decision-making capabilities to drive real-time business insights. Built from the ground up using modern technologies, Hypersonix simplifies data consumption for customers across various industry verticals. We are seeking a well-rounded, hands-on product leader to help manage key capabilities and features in our platform.
Position Overview
We are seeking a highly skilled Web Scraping Architect to join our team. The successful candidate will be responsible for designing, implementing, and maintaining web scraping processes to gather data from various online sources efficiently and accurately. As a Web Scraping Specialist, you will play a crucial role in collecting data for competitor analysis and other business intelligence purposes.
Responsibilities
- Scalability/Performance: Lead and provide expertise in scraping e-commerce marketplaces at scale.
- Data Source Identification: Identify relevant websites and online sources from which data needs to be scraped. Collaborate with the team to understand data requirements and objectives.
- Web Scraping Design: Develop and implement effective web scraping strategies to extract data from targeted websites. This includes selecting appropriate tools, libraries, or frameworks for the task.
- Data Extraction: Create and maintain web scraping scripts or programs to extract the required data. Ensure the code is optimized, reliable, and can handle changes in the website's structure.
- Data Cleansing and Validation: Cleanse and validate the collected data to eliminate errors, inconsistencies, and duplicates. Ensure data integrity and accuracy throughout the process.
- Monitoring and Maintenance: Continuously monitor and maintain the web scraping processes. Address any issues that arise due to website changes, data format modifications, or anti-scraping mechanisms.
- Scalability and Performance: Optimize web scraping procedures for efficiency and scalability, especially when dealing with a large volume of data or multiple data sources.
- Compliance and Legal Considerations: Stay up-to-date with legal and ethical considerations related to web scraping, including website terms of service, copyright, and privacy regulations.
- Documentation: Maintain detailed documentation of web scraping processes, data sources, and methodologies. Create clear and concise instructions for others to follow.
- Collaboration: Collaborate with other teams such as data analysts, developers, and business stakeholders to understand data requirements and deliver insights effectively.
- Security: Implement security measures to ensure the confidentiality and protection of sensitive data throughout the scraping process.
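As a baseline for the extraction work described above, a small requests + BeautifulSoup sketch; the URL and CSS selectors are placeholders, and any real scraping must respect robots.txt and site terms.
```python
# Sketch: fetch a page and pull structured fields out of product cards.
import requests
from bs4 import BeautifulSoup

headers = {"User-Agent": "research-bot/0.1 (contact@example.com)"}  # identify yourself
resp = requests.get("https://example.com/products", headers=headers, timeout=10)
resp.raise_for_status()

soup = BeautifulSoup(resp.text, "html.parser")
for card in soup.select("div.product"):                    # hypothetical selector
    name = card.select_one("h2").get_text(strip=True)
    price = card.select_one("span.price").get_text(strip=True)
    print(name, price)
```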
Requirements
- Proven experience of 7+ years as a Web Scraping Specialist or similar role, with a track record of successful web scraping projects
- Expertise in handling dynamic content, user-agent rotation, bypassing CAPTCHAs, rate limits, and use of proxy services
- Knowledge of browser fingerprinting
- Has leadership experience
- Proficiency in Python and in libraries or frameworks commonly used for web scraping, such as BeautifulSoup, Scrapy, or Selenium
- Strong knowledge of HTML, CSS, XPath, and other web technologies relevant to web scraping and coding
- Knowledge and experience in best-of-class data storage and retrieval for large volumes of scraped data
- Understanding of web scraping best practices, including handling dynamic content, user-agent rotation, and IP address management
- Attention to detail and ability to handle and process large volumes of data accurately
- Familiarity with data cleansing techniques and data validation processes
- Good communication skills and ability to collaborate effectively with cross-functional teams
- Knowledge of web scraping ethics, legal considerations, and compliance with website terms of service
- Strong problem-solving skills and adaptability to changing web environments
Preferred Qualifications
- Bachelor’s degree in Computer Science, Data Science, Information Technology, or related fields
- Experience with cloud-based solutions and distributed web scraping systems
- Familiarity with APIs and data extraction from non-public sources
- Knowledge of machine learning techniques for data extraction and natural language processing is desired but not mandatory
- Prior experience in handling large-scale data projects and working with big data frameworks
- Understanding of various data formats such as JSON, XML, CSV, etc.
- Experience with version control systems like Git

About the Company
Hypersonix.ai is disrupting the e-commerce space with AI, ML, and advanced decision capabilities to drive real-time business insights. Hypersonix.ai has been built from the ground up with new-age technology to simplify the consumption of data for our customers in various industry verticals. Hypersonix.ai is seeking a well-rounded, hands-on product leader to help lead product management of key capabilities and features.
About the Role
We are looking for talented and driven Data Engineers at various levels to work with customers to build the data warehouse, analytical dashboards and ML capabilities as per customer needs.
Roles and Responsibilities
- Create and maintain optimal data pipeline architecture
- Assemble large, complex data sets that meet functional / non-functional business requirements; should write complex queries in an optimized way
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies
- Run ad-hoc analysis utilizing the data pipeline to provide actionable insights
- Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs
- Keep our data separated and secure across national boundaries through multiple data centers and AWS regions
- Work with analytics and data scientist team members and assist them in building and optimizing our product into an innovative industry leader
Requirements
- Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL) as well as working familiarity with a variety of databases
- Experience building and optimizing ‘big data’ data pipelines, architectures and data sets
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement
- Strong analytic skills related to working with unstructured datasets
- Build processes supporting data transformation, data structures, metadata, dependency and workload management
- A successful history of manipulating, processing and extracting value from large disconnected datasets
- Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores
- Experience supporting and working with cross-functional teams in a dynamic environment
- We are looking for a candidate with 7+ years of experience in a Data Engineer role who has a graduate degree in Computer Science or Information Technology, or has completed an MCA.


We’re building a powerful, AI-driven communication platform — a next-generation alternative to RingCentral or 8x8 — powered by OpenAI, LangChain, and SIP/WebRTC. We're looking for a Full-Stack Software Developer who’s passionate about building real-time, AI-enabled voice infrastructure and who’s excited to work in a fast-moving, founder-led environment.
This is an opportunity to build from scratch, take ownership of core systems, and innovate on the edge of VoIP + AI.
What You’ll Do
- Design and build AI-driven voice and messaging features (e.g. smart IVRs, call transcription, virtual agents)
- Develop backend services using Python, Node.js, or Golang
- Integrate OpenAI, Whisper, and LangChain with real-time VoIP systems like Twilio, SIP, or WebRTC
- Create scalable APIs, handle call logic, and build AI pipelines
- Collaborate with the founder and early team on product strategy and infrastructure
- Participate in occasional in-person strategy meetings (Delhi, Bangalore, or nearby)
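To ground the smart-IVR idea above, a hedged sketch of a voice webhook using Flask and Twilio's TwiML helper; the intent check is a stub where an LLM classification could sit, and the phone number is a placeholder.
```python
# Sketch: a /voice webhook that routes callers based on what they said.
from flask import Flask, request
from twilio.twiml.voice_response import VoiceResponse

app = Flask(__name__)

@app.route("/voice", methods=["POST"])
def voice():
    resp = VoiceResponse()
    speech = request.form.get("SpeechResult", "")  # set by a speech <Gather>
    if "billing" in speech.lower():                # stand-in for LLM intent detection
        resp.say("Connecting you to billing.")
        resp.dial("+10000000000")                  # placeholder number
    else:
        resp.say("Please say what you are calling about.")
    return str(resp)
```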
Must-Have Skills
- Strong programming experience in Python, Node.js, or Go
- Hands-on experience with VoIP/SIP, WebRTC, or tools like Twilio, Asterisk, Plivo
- Experience integrating with LLM APIs, OpenAI, or speech-to-text models
- Solid understanding of backend design, Docker, Redis, PostgreSQL
- Ability to work independently and deliver production-grade code
Nice to Have
- Familiarity with LangChain or agent-based AI systems
- Knowledge of call routing logic, STUN/TURN, or media servers (e.g. FreeSWITCH)
- Interest in building scalable cloud-first SaaS products
Work Setup
- 🏠 Remote work (India-based, must be reachable for meetings)
- 🕐 Full-time role
- 💼 Direct collaboration with founder (technical)
- 🧘♂️ Flexible hours, strong ownership culture

Senior Generative AI Engineer
Job Id: QX016
About Us:
The QX Impact was launched with a mission to make AI accessible and affordable, and to deliver AI products and solutions at scale for enterprises by bringing the power of Data, AI, and Engineering to drive digital transformation. We believe that without insights, businesses will continue to face challenges in understanding their customers, and may even lose them.
Secondly, without insights, businesses won't be able to deliver differentiated products and services; and finally, without insights, businesses can't achieve the new level of "Operational Excellence" that is crucial to remaining competitive, meeting rising customer expectations, expanding into new markets, and digitalizing.
Job Summary:
We seek a highly experienced Senior Generative AI Engineer to focus on the development, implementation, and engineering of Gen AI applications using the latest LLMs and frameworks. This role requires hands-on expertise in Python programming, cloud platforms, and advanced AI techniques, along with additional skills in front-end technologies, data modernization, and API integration. The Senior Gen AI Engineer will be responsible for building applications from the ground up, ensuring robust, scalable, and efficient solutions.
Responsibilities:
· Build GenAI solutions such as virtual assistants, data augmentation, automated insights, and predictive analytics
· Design, develop, and fine-tune generative AI models (GANs, VAEs, Transformers).
· Handle data preprocessing, augmentation, and synthetic data generation.
· Work with NLP, text generation, and contextual comprehension tasks.
· Develop backend services using Python or .NET for LLM-powered applications.
· Build and deploy AI applications on cloud platforms (Azure, AWS, GCP).
· Optimize AI pipelines and ensure scalability.
· Stay updated with advancements in AI and ML.
Skills & Requirements:
- Strong knowledge of machine learning, deep learning, and NLP.
- Proficiency in Python, TensorFlow, PyTorch, and Keras.
- Experience with cloud services, containerization (Docker, Kubernetes), and AI model deployment.
- Understanding of LLMs, embeddings, and retrieval-augmented generation (RAG); a minimal RAG sketch follows this list.
- Ability to work independently and as part of a team.
- Bachelor’s degree in Computer Science, Mathematics, Engineering, or a related field.
- 6+ years of experience in GenAI or related roles.
- Experience with AI/ML model integration into data pipelines.
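As a rough illustration of the RAG pattern mentioned above, the sketch below grounds an LLM answer in retrieved context passages. The model name, prompt wording, and the passages argument are illustrative choices, not requirements of the role; the openai v1 client is assumed, and retrieval itself (vector search) is left out.

```python
# Minimal RAG answer step: ground the LLM in retrieved context passages.
# Model name and prompt wording are illustrative, not mandated by the posting.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY in the environment

def answer(question: str, passages: list[str]) -> str:
    context = "\n\n".join(passages)
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Answer using only the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content
```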
Core Competencies for Generative AI Engineers:
1. Programming & Software Development
a. Python – Proficiency in writing efficient and scalable code, with strong knowledge of NumPy, Pandas, TensorFlow, PyTorch, and Scikit-learn.
b. LLM Frameworks – Experience with Hugging Face Transformers, LangChain, OpenAI API, and similar tools for building and deploying large language models.
c. API development and integration using FastAPI, Flask, Django, RESTful APIs, or WebSockets (see the endpoint sketch after this list).
d. Knowledge of version control, containerization, CI/CD pipelines, and unit testing.
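For illustration, a minimal FastAPI endpoint of the kind referenced in point 1c might look like the sketch below; the route name and stubbed response are hypothetical, and a real service would call an LLM where the placeholder sits.

```python
# Hypothetical sketch: exposing a text-generation step behind a FastAPI route.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Prompt(BaseModel):
    text: str

@app.post("/generate")
def generate(prompt: Prompt) -> dict:
    # Placeholder for an actual LLM call.
    return {"completion": f"Echo: {prompt.text}"}
```

Run locally with `uvicorn app:app --reload`, assuming the file is saved as app.py.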
2. Vector Database & Cloud AI Solutions
a. Pinecone, FAISS, ChromaDB, Neo4j
b. Azure Redis / Cognitive Search
c. Azure OpenAI Service
d. Azure ML Studio Models
e. AWS (Relevant Services)
3. Data Engineering & Processing
- Handling large-scale structured & unstructured datasets.
- Proficiency in SQL, NoSQL (PostgreSQL, MongoDB), Spark, and Hadoop.
- Feature engineering and data augmentation techniques.
4. NLP & Computer Vision
- NLP: Tokenization, embeddings (Word2Vec, BERT, T5, LLaMA).
- CV: Image generation using GANs, VAEs, Stable Diffusion.
- Document Embedding – Experience with vector databases (FAISS, ChromaDB, Pinecone) and embedding models (BGE, OpenAI, SentenceTransformers); see the retrieval sketch after this list.
- Text Summarization – Knowledge of extractive and abstractive summarization techniques using models like T5, BART, and Pegasus.
- Named Entity Recognition (NER) – Experience in fine-tuning NER models and using pre-trained models from SpaCy, NLTK, or Hugging Face.
- Document Parsing & Classification – Hands-on experience with OCR (Tesseract, Azure Form Recognizer), NLP-based document classifiers, and tools like LayoutLM, PDFMiner.
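A hedged sketch of the document-embedding workflow in point 4: encode passages with SentenceTransformers and search them with FAISS. The model name and sample documents are illustrative; the sentence-transformers and faiss-cpu packages are assumed.

```python
# Embed documents and run a cosine-similarity search over them.
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative model choice
docs = ["Policy covers fire damage.", "The deductible is $500.", "Claims settle in 10 days."]

emb = np.asarray(model.encode(docs, normalize_embeddings=True), dtype=np.float32)
index = faiss.IndexFlatIP(emb.shape[1])  # inner product == cosine on unit vectors
index.add(emb)

query = np.asarray(model.encode(["How long do claims take?"], normalize_embeddings=True),
                   dtype=np.float32)
scores, ids = index.search(query, 2)
for s, i in zip(scores[0], ids[0]):
    print(f"{s:.3f}  {docs[i]}")
```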
5. Model Deployment & Optimization
- Model compression (quantization, pruning, distillation); a small quantization sketch follows this list.
- Deployment using Azure CI/CD and optimized runtimes (ONNX, TensorRT, OpenVINO) on AWS or GCP.
- Model monitoring (MLflow, Weights & Biases) and automated workflows (Azure Pipeline).
- API integration with front-end applications.
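As a small example of the model-compression item above, post-training dynamic quantization in PyTorch converts Linear layers to int8; the toy model is illustrative, not a production architecture.

```python
# Dynamic (post-training) quantization of Linear layers to int8 in PyTorch.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 10))
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

x = torch.randn(1, 512)
print(quantized(x).shape)  # same interface; smaller, faster Linear layers on CPU
```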
6. AI Ethics & Responsible AI
- Bias detection, interpretability (SHAP, LIME), and security (adversarial attacks).
7. Mathematics & Statistics
- Linear Algebra, Probability, and Optimization (Gradient Descent, Regularization, etc.).
8. Machine Learning & Deep Learning
a. Expertise in supervised, unsupervised, and reinforcement learning.
b. Proficiency in TensorFlow, PyTorch, and JAX.
c. Experience with Transformers, GANs, VAEs, Diffusion Models, and LLMs (GPT, BERT, T5).
Personal Attributes:
- Strong problem-solving skills with a passion for data architecture.
- Excellent communication skills with the ability to explain complex data concepts to non-technical stakeholders.
- Highly collaborative, capable of working with cross-functional teams.
- Ability to thrive in a fast-paced, agile environment while managing multiple priorities effectively.
Why Join Us?
- Be part of a collaborative and agile team driving cutting-edge AI and data engineering solutions.
- Work on impactful projects that make a difference across industries.
- Opportunities for professional growth and continuous learning.
- Competitive salary and benefits package.
Ready to make an impact? Apply today and become part of the QX impact team!


Role Overview
We are seeking a skilled Odoo Consultant with Python development expertise to support the design, development, and implementation of Odoo-based business solutions for our clients. The consultant will work on module customization, backend logic, API integrations, and configuration of business workflows using the Odoo framework.
Key Responsibilities
● Customize and extend Odoo modules based on client requirements
● Develop backend logic using Python and the Odoo ORM (a minimal model sketch follows this list)
● Configure business workflows, access rights, and approval processes
● Create and update views using XML and QWeb for reports and screens
● Integrate third-party systems using Odoo APIs (REST, XML-RPC)
● Participate in client discussions and translate business needs into technical solutions
● Support testing, deployment, and user training as required
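For a flavour of the backend work described above, here is a minimal, hypothetical Odoo model; the model and field names are invented for illustration, and the code only runs inside an Odoo addon module.

```python
# Hypothetical minimal Odoo model; names are illustrative, not client-specific.
from odoo import api, fields, models

class LibraryBook(models.Model):
    _name = "library.book"
    _description = "Library Book"

    name = fields.Char(required=True)
    copies = fields.Integer(default=1)
    is_available = fields.Boolean(compute="_compute_is_available", store=True)

    @api.depends("copies")
    def _compute_is_available(self):
        # Recomputed automatically whenever 'copies' changes.
        for book in self:
            book.is_available = book.copies > 0
```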
Required Skills
● Strong knowledge of Python and Odoo framework (v12 and above)
● Experience working with Odoo models, workflows, and security rules
● Good understanding of XML, QWeb, and PostgreSQL
● Experience in developing or integrating APIs (see the XML-RPC sketch after this list)
● Familiarity with Git and basic Linux server operations
● Good communication and documentation skills
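Odoo's external API is commonly consumed over XML-RPC using only the Python standard library. A hedged sketch follows; the URL, database name, and credentials are placeholders.

```python
# Query Odoo over its documented XML-RPC external API.
import xmlrpc.client

url, db, user, pwd = "https://example.odoo.com", "mydb", "admin", "secret"  # placeholders

common = xmlrpc.client.ServerProxy(f"{url}/xmlrpc/2/common")
uid = common.authenticate(db, user, pwd, {})

models = xmlrpc.client.ServerProxy(f"{url}/xmlrpc/2/object")
partners = models.execute_kw(
    db, uid, pwd, "res.partner", "search_read",
    [[["is_company", "=", True]]],          # domain filter
    {"fields": ["name"], "limit": 5},       # only fetch what is needed
)
print(partners)
```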
Preferred Qualifications
● Experience in implementing Odoo for industries such as manufacturing, retail, financial services, or real estate
● Ability to work independently and manage project timelines
● Bachelor’s degree in Computer Science, Engineering, or related field

Job Description For Associate Database Engineer (PostgreSQL)
Job Title: Associate Database Engineer (PostgreSQL)
Company: Mydbops
About us:
As a seasoned industry leader for 8 years in open-source database management, we specialize in providing unparalleled solutions and services for MySQL, MariaDB, MongoDB, PostgreSQL, TiDB, Cassandra, and more. At Mydbops, we are committed to providing exceptional service and building lasting relationships with our customers. Mydbops takes pride in being a PCI DSS-certified and ISO-certified company, reflecting our unwavering commitment to maintaining the highest security and operational excellence standards.
Position Overview:
An Associate Database Engineer is responsible for the administration and monitoring of database systems and must be available to work in shifts.
Responsibilities
● Managing and maintaining various customer database environments.
● Proactively monitoring database performance using internal tools and metrics (a long-running-query sketch follows this list).
● Implementing backup and recovery procedures.
● Ensuring data security and integrity.
● Troubleshooting database issues with a focus on internal diagnostics.
● Assisting with capacity planning and system upgrades.
● This role requires a solid understanding of database management systems, proficiency in using internal tools for performance monitoring, and flexibility to work in various shifts to ensure continuous database support.
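As an example of the proactive monitoring mentioned in the list above, a small script can flag long-running queries via PostgreSQL's pg_stat_activity view. Connection parameters are placeholders, the five-minute threshold is an arbitrary example, and the psycopg2 package is assumed.

```python
# Flag queries running longer than five minutes using pg_stat_activity.
import psycopg2

conn = psycopg2.connect(host="localhost", dbname="postgres", user="postgres")
with conn, conn.cursor() as cur:
    cur.execute("""
        SELECT pid, now() - query_start AS runtime, state, query
        FROM pg_stat_activity
        WHERE state <> 'idle'
          AND now() - query_start > interval '5 minutes'
        ORDER BY runtime DESC;
    """)
    for pid, runtime, state, query in cur.fetchall():
        print(pid, runtime, state, query[:80])
```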
Requirements
● Good knowledge of Linux OS and its tools
● Strong expertise in PostgreSQL database administration
● Proficient in SQL and at least one scripting language (Python, Bash)
● Hands-on experience with database backups, recovery, upgrades, replication, and clustering
● Troubleshooting of database issues
● Familiarity with Cloud (AWS/GCP)
● Working knowledge of AWS RDS, Aurora, CloudSQL
● Strong communication skills
● Ability to work effectively in a team environment
Preferred Qualifications:
● B.Tech/M.Tech or any equivalent degree
● Deeper understanding of databases and Linux troubleshooting
● Working knowledge of upgrades and availability solutions
● Working knowledge of backup tools like pgBackRest or Barman
● Good knowledge of query optimisation and index types
● Experience with database monitoring and management tools.
● Certifications on PostgreSQL or related technologies are a plus
● Prior experience in customer support or technical operations
Why Join Us:
● Opportunity to work in a dynamic and growing industry.
● Learning and development opportunities to enhance your career.
● A collaborative work environment with a supportive team.
Job Details:
● Job Type: Full-time
● Work Mode: Work From Home
● Experience: 1-3 years