50+ Remote Python Jobs in India
Apply to 50+ Remote Python Jobs on CutShort.io. Find your next job, effortlessly. Browse Python Jobs and apply today!


We are looking for a Senior Software Engineer to join our team and contribute to key business functions. The ideal candidate will bring relevant experience, strong problem-solving skills, and a collaborative mindset.
Responsibilities:
- Design, build, and maintain high-performance systems using modern C++
- Architect and implement containerized services using Docker, with orchestration via Kubernetes or ECS
- Build, monitor, and maintain data ingestion, transformation, and enrichment pipelines
- Implement and maintain modern CI/CD pipelines, ensuring seamless integration, testing, and delivery
- Participate in system design, peer code reviews, and performance tuning
Qualifications:
- 5+ years of software development experience, with strong command over modern C++
- Deep understanding of cloud platforms (preferably AWS) and hands-on experience in deploying and managing applications in the cloud.
- Experience with Apache Airflow for orchestrating complex data workflows.
- Experience with EKS (Elastic Kubernetes Service) for managing containerized workloads.
- Proven expertise in designing and managing robust data pipelines and microservices.
- Proficient in building and scaling data processing workflows and working with structured/unstructured data
- Strong hands-on experience with Docker, container orchestration, and microservices architecture
- Working knowledge of CI/CD practices, Git, and build/release tools
- Strong problem-solving, debugging, and cross-functional collaboration skills
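The ingestion, transformation, and enrichment responsibilities above follow a composable pipeline pattern. A minimal illustrative sketch (shown in Python for brevity, though the role itself is C++-focused; field names and the lookup table are hypothetical):

```python
# Illustrative ingestion -> transformation -> enrichment pipeline
# built from composable generators. Record fields are hypothetical.

def ingest(rows):
    """Yield raw records from any iterable source (file, queue, API)."""
    for row in rows:
        yield dict(row)

def transform(records):
    """Normalise field names and types; drop malformed records."""
    for rec in records:
        try:
            yield {"id": int(rec["id"]), "value": float(rec["value"])}
        except (KeyError, ValueError):
            continue  # malformed record: skip rather than fail the batch

def enrich(records, lookup):
    """Attach reference data from an external lookup table."""
    for rec in records:
        rec["label"] = lookup.get(rec["id"], "unknown")
        yield rec

raw = [{"id": "1", "value": "3.5"}, {"id": "x"}, {"id": "2", "value": "7"}]
labels = {1: "alpha", 2: "beta"}
result = list(enrich(transform(ingest(raw)), labels))
# result holds two enriched records; the malformed one was dropped
```

Because each stage is a generator, the same shape scales from in-memory lists to streaming sources without buffering the whole dataset.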
This position description is intended to describe the duties most frequently performed by an individual in this position. It is not intended to be a complete list of assigned duties but to describe a position level.


At Palcode.ai, we're on a mission to fix the massive inefficiencies in pre-construction. Think about it - in a $10 trillion industry, estimators still spend weeks analyzing bids, project managers struggle with scattered data, and costly mistakes slip through complex contracts. We're fixing this with purpose-built AI agents that work. Our platform cuts pre-construction workflows from weeks to hours. It's not just about AI – it's about bringing real, measurable impact to an industry ready for change. We are backed by names like AWS for Startups, Upekkha Accelerator, and Microsoft for Startups.
Why Palcode.ai
Tackle Complex Problems: Build AI that reads between the lines of construction bids, spots hidden risks in contracts, and makes sense of fragmented project data
High-Impact Code: Your code won't sit in a backlog – it goes straight to estimators and project managers who need it yesterday
Tech Challenges That Matter: Design systems that process thousands of construction documents, handle real-time pricing data, and make intelligent decisions
Build & Own: Shape our entire tech stack, from data processing pipelines to AI model deployment
Quick Impact: Small team, huge responsibility. Your solutions directly impact project decisions worth millions
Learn & Grow: Master the intersection of AI, cloud architecture, and construction tech while working with founders who've built and scaled construction software
Your Role:
- Design and build our core AI services and APIs using Python
- Create reliable, scalable backend systems that handle complex data
- Help set up cloud infrastructure and deployment pipelines
- Collaborate with our AI team to integrate machine learning models
- Write clean, tested, production-ready code
You'll fit right in if:
- You have 1 year of hands-on Python development experience
- You're comfortable with full-stack development and cloud services
- You write clean, maintainable code and follow good engineering practices
- You're curious about AI/ML and eager to learn new technologies
- You enjoy fast-paced startup environments and take ownership of your work
How we will set you up for success
- You will work closely with the Founding team to understand what we are building.
- You will be given comprehensive training about the tech stack, with the option of virtual training as well.
- You will be involved in a monthly one-on-one with the founders to discuss feedback
- A unique opportunity to learn from the best - we are Gold partners of AWS, Razorpay, and Microsoft Startup programs, with access to rich talent networks to discuss and brainstorm ideas.
- You’ll have a lot of creative freedom to execute new ideas. As long as you can convince us, and you’re confident in your skills, we’re here to back you in your execution.
Location: Bangalore, Remote
Compensation: Competitive salary + Meaningful equity
If you get excited about solving hard problems that have real-world impact, we should talk.
All the best!!
We’re seeking a highly skilled, execution-focused Data Scientist with 4–10 years of experience to join our team. This role demands hands-on expertise in fine-tuning and deploying generative AI models across image, video, and audio domains — with a special focus on lip-sync, character consistency, and automated quality evaluation frameworks. You will be expected to run rapid experiments, test architectural variations, and deliver working model iterations quickly in a high-velocity R&D environment.
Responsibilities
- Run end-to-end fine-tuning experiments on state-of-the-art models (Flux family, LoRA, diffusion-based architectures, context-based composition).
- Develop and optimize generative AI models for audio generation and lip-sync, ensuring high fidelity and natural delivery.
- Extend current language models to support regional Indian languages beyond US/UK English for audio and content generation.
- Enable emotional delivery in generated audio (shouting, crying, whispering) to enhance realism.
- Integrate and synchronize background scores seamlessly with generated video content.
- Work towards achieving video quality standards comparable to Veo3/Sora.
- Ensure consistency in scenes and character generation across multiple outputs.
- Design and implement automated, objective evaluation frameworks to replace subjective human review — for cover images, video frames, and audio clips. Implement scoring systems that standardize quality checks before editorial approval.
- Run comparative tests across multiple model architectures to evaluate trade-offs in quality, speed, and efficiency.
- Drive initiatives independently, showcasing high agency and accountability. Utilize strong first-principle thinking to tackle complex challenges.
- Apply a research-first approach with rapid experimentation in the fast-evolving Generative AI space.
Requirements
- 4-10 years of experience in Data Science, with a strong focus on Generative AI.
- Familiarity with state-of-the-art models in generative AI (e.g., Flux, diffusion models, GANs).
- Proven expertise in developing and deploying models for audio and video generation.
- Demonstrated experience with natural language processing (NLP), especially for regional language adaptation.
- Experience with model fine-tuning and optimization techniques.
- Hands-on exposure to ML deployment pipelines (FastAPI or equivalent).
- Strong programming skills in Python and relevant deep learning frameworks (e.g., TensorFlow, PyTorch).
- Experience in designing and implementing automated evaluation metrics for generative content.
- A portfolio or demonstrable experience in projects related to content generation, lip-sync, or emotional AI is a plus.
- Exceptional problem-solving skills and a proactive approach to research and experimentation.
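The automated quality gate described in the responsibilities can be sketched as a simple threshold check over per-metric scores. This is a toy illustration only: the metric names and thresholds are hypothetical placeholders for real scorers (lip-sync offset, frame consistency, audio fidelity, etc.):

```python
# Toy sketch of an automated quality gate replacing subjective review.
# Metric names and thresholds are hypothetical placeholders.

THRESHOLDS = {"lip_sync": 0.8, "consistency": 0.7, "audio": 0.75}

def quality_gate(scores):
    """Return (approved, per-metric pass/fail) for one generated clip."""
    results = {m: scores.get(m, 0.0) >= t for m, t in THRESHOLDS.items()}
    return all(results.values()), results

approved, detail = quality_gate({"lip_sync": 0.9, "consistency": 0.72, "audio": 0.8})
# approved is True: every metric clears its threshold
```

In practice each score would come from a learned or heuristic evaluator, and the thresholds would be calibrated against historical editorial decisions.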
Benefits
- Best in class salary: We hire only the best, and we pay accordingly.
- Proximity Talks: Meet other designers, engineers, and product geeks — and learn from experts in the field.
- Keep on learning with a world-class team: Work with the best in the field, challenge yourself constantly, and learn something new every day.
About us
Proximity is the trusted technology, design, and consulting partner for some of the biggest Sports, Media, and Entertainment companies in the world! We’re headquartered in San Francisco and have offices in Palo Alto, Dubai, Mumbai, and Bangalore. Since 2019, Proximity has created and grown high-impact, scalable products used by 370 million daily users, with a total net worth of $45.7 billion among our client companies.
Today, we are a global team of coders, designers, product managers, geeks, and experts. We solve complex problems and build cutting-edge tech, at scale. Our team of Proxonauts is growing quickly, which means your impact on the company’s success will be huge. You’ll have the chance to work with experienced leaders who have built and led multiple tech, product, and design teams.


About the Job
AI/ML Engineer
Experience: 1–5 Years
Salary: Competitive
Preferred Notice Period: Immediate to 30 Days
Opportunity Type: Remote (Global)
Placement Type: Freelance/Contract
(Note: This is a requirement for one of TalentLo’s Clients)
About TalentLo
TalentLo is a revolutionary talent platform connecting exceptional tech professionals with high-quality clients worldwide. We’re building a carefully curated pool of skilled experts to match with companies actively seeking specialized talent for impactful projects.
Role Overview
We’re seeking experienced AI/ML Engineers with 1–5 years of professional experience to design, build, and deploy practical machine learning solutions. This is a freelance/contract opportunity where you’ll work remotely with global clients on innovative AI-driven projects that create real-world business impact.
Responsibilities
- Design and implement end-to-end machine learning solutions
- Select and apply appropriate algorithms/models for business problems
- Build and optimize data pipelines and feature engineering workflows
- Deploy, monitor, and scale ML models in production environments
- Ensure performance optimization and scalability of solutions
- Translate business requirements into ML/AI specifications
- Collaborate with cross-functional teams to integrate ML solutions
- Communicate technical concepts clearly to non-technical stakeholders
Requirements
- 1–5 years of professional AI/ML development experience
- Strong proficiency in Python and ML frameworks (TensorFlow, PyTorch, scikit-learn)
- Hands-on experience deploying models into production environments
- Solid knowledge of feature engineering and data preprocessing techniques
- Experience with cloud ML services (AWS Sagemaker, GCP Vertex AI, Azure ML)
- Understanding of statistical concepts, validation methods, and ML evaluation metrics
- Familiarity with data engineering workflows and data pipelines
- Version control and collaboration experience (Git, GitHub, GitLab)
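The ML evaluation metrics mentioned in the requirements (precision, recall, F1) are easy to verify by hand; a stdlib-only sketch for binary classification:

```python
# Stdlib-only sketch of standard binary-classification metrics.

def prf1(y_true, y_pred):
    """Return (precision, recall, F1) for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

p, r, f = prf1([1, 0, 1, 1, 0], [1, 0, 0, 1, 1])
# tp=2, fp=1, fn=1 -> precision 2/3, recall 2/3, F1 2/3
```

Production code would use `sklearn.metrics`, but knowing the counts behind the numbers is what interviews for roles like this tend to probe.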
How to Apply
- Create your profile on TalentLo’s platform → https://www.talentlo.com/signup
- Submit your GitHub, portfolio, or sample AI/ML projects
- Get shortlisted & connect with the client
✨ If you’re ready to work on cutting-edge AI projects, collaborate with global teams, and take your career to the next level — apply today!




Title - Principal Software Engineer
Company Summary :
As the recognized global standard for project-based businesses, Deltek delivers software and information solutions to help organizations achieve their purpose. Our market leadership stems from the work of our diverse employees who are united by a passion for learning, growing and making a difference. At Deltek, we take immense pride in creating a balanced, values-driven environment, where every employee feels included and empowered to do their best work. Our employees put our core values into action daily, creating a one-of-a-kind culture that has been recognized globally. Thanks to our incredible team, Deltek has been named one of America's Best Midsize Employers by Forbes, a Best Place to Work by Glassdoor, a Top Workplace by The Washington Post and a Best Place to Work in Asia by World HRD Congress. www.deltek.com
Business Summary :
The Deltek Engineering and Technology team builds best-in-class solutions to delight customers and meet their business needs. We are laser-focused on software design, development, innovation and quality. Our team of experts has the talent, skills and values to deliver products and services that are easy to use, reliable, sustainable and competitive. If you're looking for a safe environment where ideas are welcome, growth is supported and questions are encouraged – consider joining us as we explore the limitless opportunities of the software industry.
Principal Software Engineer
Position Responsibilities :
- Develop and manage integrations with third-party services and APIs using industry-standard protocols like OAuth2 for secure authentication and authorization.
- Develop scalable, performant APIs for Deltek products
- Take accountability for the successful implementation of requirements by the team.
- Troubleshoot, debug, and optimize code and workflows for better performance and scalability.
- Undertake analysis, design, coding and testing activities of complex modules
- Support the company’s development processes and development guidelines including code reviews, coding style and unit testing requirements.
- Participate in code reviews and provide mentorship to junior developers.
- Stay up-to-date with emerging technologies and best practices in Python development, AWS, and frontend frameworks like React, and suggest optimisations based on them
- Adopt industry best practices in all your projects - TDD, CI/CD, Infrastructure as Code, linting
- Pragmatic enough to deliver an MVP, but aspirational enough to think about how it will work with millions of users and adapt to new challenges
- Readiness to hit the ground running – you may not know how to solve everything right off the bat, but you will put in the time and effort to understand so that you can design architecture of complex features with multiple components.
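The OAuth2 integration work in the first responsibility typically starts with the client-credentials grant (RFC 6749 §4.4). A minimal sketch of building such a token request with the standard library; the endpoint URL and credentials are placeholders, and the actual HTTPS POST is omitted:

```python
# Minimal sketch of an OAuth2 client-credentials token request
# (RFC 6749 section 4.4). Endpoint and credentials are placeholders;
# a real integration would POST this form body over HTTPS.
import base64
from urllib.parse import urlencode

def build_token_request(token_url, client_id, client_secret, scope=None):
    """Return (url, headers, body) for a client-credentials grant."""
    creds = base64.b64encode(f"{client_id}:{client_secret}".encode()).decode()
    headers = {
        "Authorization": f"Basic {creds}",
        "Content-Type": "application/x-www-form-urlencoded",
    }
    form = {"grant_type": "client_credentials"}
    if scope:
        form["scope"] = scope
    return token_url, headers, urlencode(form)

url, headers, body = build_token_request(
    "https://auth.example.com/oauth/token", "my-app", "s3cret", scope="read"
)
# body: "grant_type=client_credentials&scope=read"
```

The returned access token would then be sent as a `Bearer` header on subsequent API calls.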
Qualifications :
- A college degree in Computer Science, Software Engineering, Information Science or a related field is required
- Minimum 8-10 years of experience. Sound programming skills in Python, the .NET platform (VB & C#), TypeScript/JavaScript, frontend technologies like React.js/Ember.js, and SQL databases (like PostgreSQL)
- Experience in backend development and Apache Airflow (or equivalent framework).
- Ability to build APIs and optimize SQL queries with performance in mind.
- Experience with Agile Development
- Experience in writing and maintaining unit tests and using testing frameworks is desirable
- Exposure to Amazon Web Services (AWS) technologies, Terraform, Docker is a plus
- Strong desire to continually improve knowledge and skills through personal development activities and apply their knowledge and skills to continuous software improvement.
- The ability to work under tight deadlines, tolerate ambiguity and work effectively in an environment with multiple competing priorities.
- Strong problem-solving and debugging skills.
- Ability to work in an Agile environment and collaborate with cross-functional teams.
- Familiarity with version control systems like Git.
- Excellent communication skills and the ability to work effectively in a remote or hybrid team setting.

About Ven Analytics
At Ven Analytics, we don’t just crunch numbers — we decode them to uncover insights that drive real business impact. We’re a data-driven analytics company that partners with high-growth startups and enterprises to build powerful data products, business intelligence systems, and scalable reporting solutions. With a focus on innovation, collaboration, and continuous learning, we empower our teams to solve real-world business problems using the power of data.
Role Overview
We’re looking for a Power BI Data Engineer who is not just proficient in tools but passionate about building insightful, scalable, and high-performing dashboards. The ideal candidate should have strong fundamentals in data modeling, a flair for storytelling through data, and the technical skills to implement robust data solutions using Power BI, Python, and SQL.
Key Responsibilities
- Technical Expertise: Develop scalable, accurate, and maintainable data models using Power BI, with a clear understanding of Data Modeling, DAX, Power Query, and visualization principles.
- Programming Proficiency: Use SQL and Python for complex data manipulation, automation, and analysis.
- Business Problem Translation: Collaborate with stakeholders to convert business problems into structured data-centric solutions considering performance, scalability, and commercial goals.
- Hypothesis Development: Break down complex use-cases into testable hypotheses and define relevant datasets required for evaluation.
- Solution Design: Create wireframes, proof-of-concepts (POC), and final dashboards in line with business requirements.
- Dashboard Quality: Ensure dashboards meet high standards of data accuracy, visual clarity, performance, and support SLAs.
- Performance Optimization: Continuously enhance user experience by improving performance, maintainability, and scalability of Power BI solutions.
- Troubleshooting & Support: Quick resolution of access, latency, and data issues as per defined SLAs.
Must-Have Skills
- Strong experience building robust data models in Power BI
- Hands-on expertise with DAX (complex measures and calculated columns)
- Proficiency in M Language (Power Query) beyond drag-and-drop UI
- Clear understanding of data visualization best practices (less fluff, more insight)
- Solid grasp of SQL and Python for data processing
- Strong analytical thinking and ability to craft compelling data stories
Good-to-Have (Bonus Points)
- Experience using DAX Studio and Tabular Editor
- Prior work in a high-volume data processing production environment
- Exposure to modern CI/CD practices or version control with BI tools
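A measure like "total sales per region" is often prototyped in plain SQL or Python before being encoded as a DAX measure in the model. A toy sketch of that upstream shaping (column names are hypothetical):

```python
# Toy sketch: prototyping a grouped measure (total sales per region)
# in plain Python before encoding it as a DAX measure or SQL view.
# Column names are hypothetical.
from collections import defaultdict

rows = [
    {"region": "North", "sales": 120.0},
    {"region": "South", "sales": 80.0},
    {"region": "North", "sales": 50.0},
]

def total_by(rows, key, value):
    """Sum `value` grouped by `key`, like a simple SUMX over a column."""
    totals = defaultdict(float)
    for r in rows:
        totals[r[key]] += r[value]
    return dict(totals)

measure = total_by(rows, "region", "sales")
# measure: {"North": 170.0, "South": 80.0}
```

Validating the expected numbers this way makes it much easier to spot filter-context mistakes once the logic moves into DAX.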
Why Join Ven Analytics?
- Be part of a fast-growing startup that puts data at the heart of every decision.
- Opportunity to work on high-impact, real-world business challenges.
- Collaborative, transparent, and learning-oriented work environment.
- Flexible work culture and focus on career development.

About the Job
Data Engineer
Experience: 1–5 Years
Salary: Competitive
Preferred Notice Period: Immediate to 30 Days
Opportunity Type: Remote (Global)
Placement Type: Freelance/Contract
(Note: This is a requirement for one of TalentLo’s Clients)
Role Overview
We’re seeking skilled Data Engineers with 1–5 years of professional experience to design, build, and optimize data pipelines and infrastructure. This is a freelance/contract opportunity where you’ll work remotely with global clients, contribute to innovative projects, and expand your expertise in modern data engineering.
Responsibilities
- Design, build, and maintain scalable ETL/ELT pipelines
- Write efficient SQL queries and scripts for data extraction and transformation
- Use Python (or similar languages) for data manipulation and automation
- Integrate and manage structured and unstructured data from diverse sources
- Collaborate with cross-functional teams to ensure data quality, consistency, and reliability
- Optimize data workflows for performance and scalability
- Stay updated with modern data engineering tools and best practices
Requirements
- 1–5 years of experience in data engineering or related roles
- Strong SQL fundamentals and query optimization skills
- Proficiency in Python (or a similar programming language)
- Hands-on experience with relational and NoSQL databases
- Familiarity with ETL/ELT processes and data pipeline tools
- Knowledge of cloud platforms (AWS, GCP, or Azure) is a plus
- Strong problem-solving skills and attention to detail
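The ETL/ELT responsibilities above reduce to a familiar three-stage shape. A self-contained sketch using only the standard library (the source data and table schema are illustrative):

```python
# Minimal ETL sketch using only the standard library: extract rows,
# transform (clean types, drop bad rows), load into SQLite.
import sqlite3

def extract():
    """Stand-in for reading from an API, file, or upstream database."""
    return [("alice", "42"), ("bob", "n/a"), ("carol", "7")]

def transform(rows):
    """Cast score to int; drop rows that fail type cleaning."""
    out = []
    for name, score in rows:
        try:
            out.append((name, int(score)))
        except ValueError:
            continue
    return out

def load(rows, conn):
    conn.execute("CREATE TABLE IF NOT EXISTS scores (name TEXT, score INTEGER)")
    conn.executemany("INSERT INTO scores VALUES (?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract()), conn)
loaded = conn.execute("SELECT name, score FROM scores ORDER BY name").fetchall()
# loaded: [("alice", 42), ("carol", 7)]
```

Real pipelines add orchestration (Airflow and similar), incremental loads, and data-quality checks, but the extract/transform/load separation stays the same.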
How to Apply
- Create your profile on TalentLo’s platform → https://www.talentlo.com/signup
- Submit your GitHub, portfolio, or sample projects
- Get shortlisted & connect with the client
About TalentLo
TalentLo is a global platform connecting exceptional tech talent with high-quality clients worldwide. We’re building a select pool of talented professionals to match with companies actively seeking specialized expertise.
✨ If you’re ready to take on exciting data projects, collaborate with global teams, and accelerate your career — apply today!

Certa (getcerta.com) is a Silicon Valley-based startup automating the vendor, supplier, and stakeholder onboarding processes for businesses globally. Serving Fortune 500 and Fortune 1000 clients, Certa's engineering team tackles expansive and deeply technical challenges, driving innovation in business processes across industries.
Location: Remote (India only)
Role Overview
We are looking for an experienced and innovative AI Engineer to join our team and push the boundaries of large language model (LLM) technology to drive significant impact in our products and services. In this role, you will leverage your strong software engineering skills (particularly in Python and cloud-based backend systems) and your hands-on experience with cutting-edge AI (LLMs, prompt engineering, Retrieval-Augmented Generation, etc.) to build intelligent features for enterprise (B2B SaaS). As an AI Engineer on our team, you will design and deploy AI-driven solutions (such as LLM-powered agents and context-aware systems) from prototype to production, iterating quickly and staying up-to-date with the latest developments in the AI space. This is a unique opportunity to be at the forefront of a new class of engineering roles that blend robust backend system design with state-of-the-art AI integration, shaping the future of user experiences in our domain.
Key Responsibilities
- Design and Develop AI Features: Lead the design, development, and deployment of generative AI capabilities and LLM-powered services that deliver engaging, human-centric user experiences. This includes building features like intelligent chatbots, AI-driven recommendations, and workflow automation into our products.
- RAG Pipeline Implementation: Design, implement, and continuously optimize end-to-end RAG (Retrieval-Augmented Generation) pipelines, including data ingestion and parsing, document chunking, vector indexing, and prompt engineering strategies to provide relevant context to LLMs. Ensure that our AI systems can efficiently retrieve and use information from knowledge bases to enhance answer accuracy.
- Build LLM-Based Agents: Develop and refine LLM-based agentic systems that can autonomously perform complex tasks or assist users in multi-step workflows. Incorporate tools for planning, memory, and context management (e.g. long-term memory stores, tool use via APIs) to extend the capabilities of our AI agents. Experiment with emerging best practices in agent design (planning algorithms, self-healing loops, etc.) to make these agents more reliable and effective.
- Integrate with Product Teams: Work closely with product managers, designers, and other engineers to integrate AI capabilities seamlessly into our products, ensuring that features align with user needs and business goals. You’ll collaborate cross-functionally to translate product requirements into AI solutions, and iterate based on feedback and testing.
- System Evaluation & Iteration: Rigorously evaluate the performance of AI models and pipelines using appropriate metrics – including accuracy/correctness, response latency, and avoidance of errors like hallucinations. Conduct thorough testing and use user feedback to drive continuous improvements in model prompts, parameters, and data processing.
- Code Quality & Best Practices: Write clean, maintainable, and testable code while following software engineering best practices. Ensure that the AI components are well-structured, scalable, and fit into our overall system architecture. Implement monitoring and logging for AI services to track performance and reliability in production.
- Mentorship and Knowledge Sharing: Provide technical guidance and mentorship to team members on best practices in generative AI development. Help educate and upskill colleagues (e.g. through code reviews, tech talks) in areas like prompt engineering, using our AI toolchain, and evaluating model outputs. Foster a culture of continuous learning and experimentation with new AI technologies.
- Research & Innovation: Continuously explore the latest advancements in AI/ML (new model releases, libraries, techniques) and assess their potential value for our products. You will have the freedom to prototype innovative solutions – for example, trying new fine-tuning methods or integrating new APIs – and bring those into our platform if they prove beneficial. Staying current with emerging research and industry trends is a key part of this role.
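The retrieval step of the RAG pipeline described above can be illustrated with a stdlib-only toy: chunks are embedded as bag-of-words vectors and ranked by cosine similarity against the query. A production system would use learned embeddings and a vector database (Pinecone, Chroma, etc.); the documents here are invented:

```python
# Stdlib-only sketch of the retrieval step in a RAG pipeline:
# embed chunks as bag-of-words vectors, rank by cosine similarity,
# and prepend the top chunks to the prompt as context.
import math
from collections import Counter

def embed(text):
    """Toy embedding: a bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=2):
    """Return the k chunks most similar to the query."""
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]

chunks = [
    "Vendor onboarding starts after compliance checks.",
    "The cafeteria menu changes weekly.",
    "Supplier onboarding requires a signed contract.",
]
context = retrieve("How does vendor onboarding work?", chunks, k=2)
prompt = "Context:\n" + "\n".join(context) + "\n\nQuestion: How does vendor onboarding work?"
```

Swapping `embed` for a real embedding model and `retrieve` for a vector-store query keeps the same interface while fixing the obvious weakness of exact-token matching.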
Required Skills and Qualifications
- Software Engineering Experience: 3+ years (Mid-level) / 5+ years (Senior) of professional software engineering experience. Rock-solid backend development skills with expertise in Python and designing scalable APIs/services. Experience building and deploying systems on AWS or similar cloud platforms is required (including familiarity with cloud infrastructure and distributed computing). Strong system design abilities with a track record of designing robust, maintainable architectures is a must.
- LLM/AI Application Experience: Proven experience building applications that leverage large language models or generative AI. You have spent time prompting and integrating language models into real products (e.g. building chatbots, semantic search, AI assistants) and understand their behavior and failure modes. Demonstrable projects or work in LLM-powered application development – especially using techniques like RAG or building LLM-driven agents – will make you stand out.
- AI/ML Knowledge: Prioritize applied LLM product engineering over traditional ML pipelines. Strong chops in prompt design, function calling/structured outputs, tool use, context-window management, and the RAG levers that matter (document parsing/chunking, metadata, re-ranking, embedding/model selection). Make pragmatic model/provider choices (hosted vs. open) using latency, cost, context length, safety, and rate-limit trade-offs; know when simple prompting/config changes beat fine-tuning, and when lightweight adapters or fine-tuning are justified. Design evaluation that mirrors product outcomes: golden sets, automated prompt unit tests, offline checks, and online A/Bs for helpfulness/correctness/safety; track production proxies like retrieval recall and hallucination rate. Solid understanding of embeddings, tokenization, and vector search fundamentals, plus working literacy in transformers to reason about capabilities/limits. Familiarity with agent patterns (planning, tool orchestration, memory) and guardrail/safety techniques.
- Tooling & Frameworks: Hands-on experience with the AI/LLM tech stack and libraries. This includes proficiency with LLM orchestration libraries such as LangChain, LlamaIndex, etc., for building prompt pipelines. Experience working with vector databases or semantic search (e.g. Pinecone, Chroma, Milvus) to enable retrieval-augmented generation is highly desired.
- Cloud & DevOps: Own the productionization of LLM/RAG-backed services as high-availability, low-latency backends. Expertise in AWS (e.g., ECS/EKS/Lambda, API Gateway/ALB, S3, DynamoDB/Postgres, OpenSearch, SQS/SNS/Step Functions, Secrets Manager/KMS, VPC) and infrastructure-as-code (Terraform/CDK). You’re comfortable shipping stateless APIs, event-driven pipelines, and retrieval infrastructure (vector stores, caches) with strong observability (p95/p99 latency, distributed tracing, retries/circuit breakers), security (PII handling, encryption, least-privilege IAM, private networking to model endpoints), and progressive delivery (blue/green, canary, feature flags). Build prompt/config rollout workflows, manage token/cost budgets, apply caching/batching/streaming strategies, and implement graceful fallbacks across multiple model providers.
- Product and Domain Experience: Experience building enterprise (B2B SaaS) products is a strong plus. This means you understand considerations like user experience, scalability, security, and compliance. Past exposure to these types of products will help you design AI solutions that cater to a range of end-users.
- Strong Communication & Collaboration: Excellent interpersonal and communication skills, with an ability to explain complex AI concepts to non-technical stakeholders and create clarity from ambiguity. You work effectively in cross-functional teams and can coordinate with product, design, and ops teams to drive projects forward.
- Problem-Solving & Autonomy: Self-motivated and able to manage multiple priorities in a fast-paced environment. You have a demonstrated ability to troubleshoot complex systems, debug issues across the stack, and quickly prototype solutions. A “figure it out” attitude and creative approach to overcoming technical challenges are key.
Preferred (Bonus) Qualifications
- Multi-Modal and Agents: Experience developing complex agentic systems using LLMs (for example, multi-agent systems or integrating LLMs with tool networks) is a bonus. Similarly, knowledge of multi-modal AI (combining text with vision or other data) could be useful as we expand our product capabilities.
- Startup/Agile Environment: Prior experience in an early-stage startup or similarly fast-paced environment where you’ve worn multiple hats and adapted to rapid changes. This role will involve quick iteration and evolving requirements, so comfort with ambiguity and agility is valued.
- Community/Research Involvement: Active participation in the AI community (open-source contributions, research publications, or blogging about AI advancements) is appreciated. It demonstrates passion and keeps you at the cutting edge. If you have published research or have a portfolio of AI side projects, let us know!
Perks of working at Certa.ai:
- Best-in-class compensation
- Fully-remote work with flexible schedules
- Continuous learning
- Massive opportunities for growth
- Yearly offsite
- Quarterly hacker house
- Comprehensive health coverage
- Parental Leave
- Latest Tech Workstation
- Rockstar team to work with (we mean it!)

Springer Capital is a cross-border asset management firm specializing in real estate investment banking between China and the USA. We are offering a remote internship for aspiring data engineers interested in data pipeline development, data integration, and business intelligence. The internship offers flexible start and end dates. A short quiz or technical task may be required as part of the selection process.
Responsibilities:
- Design, build, and maintain scalable data pipelines for structured and unstructured data sources
- Develop ETL processes to collect, clean, and transform data from internal and external systems
- Support integration of data into dashboards, analytics tools, and reporting systems
- Collaborate with data analysts and software developers to improve data accessibility and performance
- Document workflows and maintain data infrastructure best practices
- Assist in identifying opportunities to automate repetitive data tasks
Please send your resume to talent@springer.capital


What will you do?
● Design, develop and maintain various Bioinformatics solutions for analyzing and managing Biological data as per our best practices and quality standards.
● Collaborate with Product Management and Software Engineers, to build bioinformatics solutions.
● Produce high quality and detailed documentation for all projects.
● Provide consultation and technical solutions to cross functional bioinformatics projects.
● Identify and/or conceive novel approaches to better and more efficiently analyze client and public datasets.
● Participate in defining the high-level application architecture for future roadmap requirements and features.
● Coach other team members through code reviews and by sharing domain and technical knowledge.
● Participate in activities such as hiring and onboarding.
● Work with cross-functional teams to ensure quality throughout the bioinformatics software and analytical pipelines development lifecycle.
● Ensure compliance with our SDLC process during product development.
● Stay up-to-date on technology to deliver quality at each phase of the product life-cycle. You take leadership in evangelizing technical excellence within the team.
What do you bring to the table?
● Ph.D. degree with 3-5+ years of postdoc or industrial experience or Masters Degree with 5-8+ years of industrial experience in Bioinformatics, Computer Science, Bioengineering, Computational Biology or related field
● Excellent programming skills in Python and Shell scripting
● Experience with relational databases such as PostgreSQL, MySQL, or Oracle
● Experience with version control systems such as Git/GitHub.
● Experience with Linux/UNIX/Mac OS X based systems
● Experience with high-performance Linux cluster and cloud computing (AWS is preferred).
● Deep understanding of analytical approaches and tools for genomic data analysis along with familiarity with genomic databases. Candidates with proven expertise in the analysis of NGS data generated on sequencing platforms such as Illumina, Oxford Nanopore, or Thermo will be prioritized.
● Experience with open source bioinformatics tools and publicly available variant databases.
● Ability to manage moderately complex projects and initiatives.
● Exceptionally strong communication, data presentation and visualization skills.
● Personal initiative and ability to work effectively within a cross functional team.
● Excellent communication skills and ability to learn and work independently when necessary.
● High energy, inquisitiveness, and strong attention to detail
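The Python scripting skills listed above can be illustrated with a small, dependency-free sketch: parsing FASTA-formatted text and computing per-sequence GC content. Real pipelines would typically use Biopython or pysam; sequence names here are invented.

```python
def parse_fasta(text):
    """Yield (header, sequence) pairs from FASTA-formatted text."""
    header, chunks = None, []
    for line in text.strip().splitlines():
        if line.startswith(">"):
            if header is not None:
                yield header, "".join(chunks)
            header, chunks = line[1:], []
        else:
            chunks.append(line.strip())
    if header is not None:
        yield header, "".join(chunks)

def gc_content(seq):
    """Fraction of G/C bases in a nucleotide sequence."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq) if seq else 0.0

# Toy input; real data would come from sequencer output files.
FASTA = """>read_1
ATGCGC
>read_2
ATATAT
"""

if __name__ == "__main__":
    for name, seq in parse_fasta(FASTA):
        print(name, round(gc_content(seq), 2))  # read_1 0.67 / read_2 0.0
```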


About the Role:
We are looking for a skilled Full-Stack Developer with expertise in Python, JavaScript, and No-Code AI tools to join our dynamic team. The ideal candidate should be proficient in both backend and frontend development, capable of working with modern frameworks, and have experience in LLM prompt engineering, data extraction, and response formatting.
Key Responsibilities:
- Develop and maintain scalable backend services using FastAPI / Flask / Django.
- Build dynamic front-end applications using React / Next.js.
- Implement LLM-based solutions for data extraction and response formatting.
- Design and optimize databases using Milvus / Weaviate / Pinecone for vector storage and MongoDB / MySQL for structured data.
- Collaborate with cross-functional teams to deliver high-quality AI-driven applications.
- Ensure application performance, security, and scalability.
- Communicate technical ideas effectively through written and verbal communication.
Required Skills & Qualifications:
Technical Skills:
- Programming: Proficiency in Python and JavaScript.
- Backend: Experience with FastAPI / Flask / Django.
- Frontend: Strong understanding of React / Next.js.
- Database: Knowledge of at least one vector database (Milvus / Weaviate / Pinecone) and one relational or NoSQL database (MongoDB / MySQL).
- No-Code AI & LLM:
- Expertise in LLM Prompt Engineering.
- Experience with data extraction from context and response formatting.
Soft Skills:
- Strong written and verbal communication skills.
- Ability to collaborate effectively with teams and clients.
- Problem-solving mindset with a focus on innovation and efficiency.
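The "data extraction and response formatting" skill above can be sketched as pulling a structured record out of free-form LLM output and normalizing it. The sample output, field names, and response shape are illustrative assumptions, not any product's schema.

```python
import json
import re

# Example of free-form model output wrapping a JSON payload (invented sample).
LLM_OUTPUT = ('Sure! Here is the extracted record: '
              '{"name": "Acme Corp", "invoice_total": 1250.0, "currency": "USD"} '
              'Let me know if you need anything else.')

def extract_json(text):
    """Find the first JSON object embedded in model output and parse it."""
    match = re.search(r"\{.*\}", text, re.DOTALL)
    if not match:
        raise ValueError("no JSON object found in model output")
    return json.loads(match.group(0))

def format_response(record):
    """Normalize the extracted record into a fixed response shape."""
    return {"company": record.get("name"),
            "total": float(record.get("invoice_total", 0)),
            "currency": record.get("currency", "USD")}
```

In production this would be paired with prompt constraints (asking the model for JSON only) and schema validation, but the extract-then-format split is the core pattern.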

Position: Lead Backend Engineer
Location: Remote
Experience: 10+ Years
Budget: 1.7 LPM
Employment Type: [Contract]
Required Skills & Qualifications:
- 10+ years of proven backend engineering experience.
- Strong proficiency in Python.
- Expertise in SQL (Postgres) and database optimization.
- Hands-on experience with OpenAI APIs.
- Strong command of FastAPI and microservices architecture.
- Solid knowledge of debugging, troubleshooting, and performance tuning.
Nice to Have:
- Experience with Agentic Systems or ability to quickly adopt them.
- Exposure to modern CI/CD pipelines, containerization (Docker/Kubernetes), and cloud platforms (AWS, Azure, or GCP).
About the Role
We are seeking a highly skilled Penetration Tester / Red Team Operator with proven expertise in conducting authorized security assessments. This role requires deep hands-on experience in exploitation and post-exploitation techniques, applied strictly within ethical and legally approved environments.
You will simulate advanced real-world adversaries, identify vulnerabilities, demonstrate impact through controlled exploitation, and provide actionable remediation to improve the organization’s and clients’ overall security posture.
Key Responsibilities
- Perform authorized penetration tests and red team operations to identify, validate, and document vulnerabilities.
- Conduct controlled exploitation activities that demonstrate system compromise in authorized environments.
- Execute post-exploitation techniques such as privilege escalation, lateral movement, and persistence.
- Deliver clear and detailed technical reports with practical remediation steps for both technical teams and executive stakeholders.
- Collaborate closely with security and engineering teams to strengthen organizational defenses and reduce real-world attack exposure.
Requirements
- 2–5 years of practical experience in penetration testing, red team engagements, or bug bounty programs.
- Proven track record of successful exploitation in authorized environments (e.g., achieving shell access, privilege escalation, lateral movement).
- Strong knowledge of:
- Web exploitation: injection flaws, file upload vulnerabilities, deserialization attacks.
- System exploitation: Windows & Linux privilege escalation, misconfigurations, persistence techniques.
- Network penetration: pivoting, lateral movement, and advanced network exploitation.
- Proficiency with tools such as Metasploit, Cobalt Strike, Nmap, Burp Suite, and custom payload development.
- Solid understanding of operating systems, networking fundamentals, and security controls.
- Excellent technical documentation and communication skills.
- Strong commitment to ethical, legal, and authorized testing practices.
Preferred Qualifications
- Industry certifications: OSCP, OSEP, CRTP, CRTO.
- Recognized achievements in bug bounty programs, CVEs, or published security research.
- Contributions to open-source security tools or frameworks.
- Experience with cloud penetration testing (AWS, Azure, GCP) or scripting/automation (Python, PowerShell, Bash).
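The Python scripting/automation skill listed above can be illustrated with a benign building block: a TCP connect check of the kind used in reconnaissance tooling. This is a toy sketch for authorized environments only; the demo targets a throwaway local listener, never a remote system.

```python
import socket
import socketserver
import threading

def is_port_open(host, port, timeout=1.0):
    """Return True if a TCP connection to (host, port) succeeds.

    Only run against systems you are explicitly authorized to test.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Demo against a local throwaway listener, not a remote target.
    server = socketserver.TCPServer(("127.0.0.1", 0),
                                    socketserver.BaseRequestHandler)
    port = server.server_address[1]
    threading.Thread(target=server.serve_forever, daemon=True).start()
    print(is_port_open("127.0.0.1", port))  # True: the local listener is up
    server.shutdown()
    server.server_close()
```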
Why Join Us?
At KREOVATE NUSA DIGITAL, we value skill, innovation, and ethics. Joining us means you’ll be part of a remote-first international team that thrives on solving real-world cybersecurity challenges.
- 🌍 100% Remote work with flexible hours
- 🚀 Challenging projects with direct real-world impact (not just lab simulations)
- 📈 Opportunities to grow in advanced red team operations and adversary simulation
- 🤝 Collaborative culture with global security experts
- 💡 A workplace that values continuous learning, ethical hacking, and innovation
Compensation
💰 ₹12L – ₹20L per year (~USD 14K – 23K annually)
- ESOPs available (for select roles)
- Transparent performance-based growth path
📩 How to Apply
If you’re passionate about ethical hacking, real-world security challenges, and continuous learning, we’d love to hear from you. Apply directly via this platform or connect with our hiring team.

About Us :
CLOUDSUFI, a Google Cloud Premier Partner, is a Data Science and Product Engineering organization building products and solutions for the Technology and Enterprise industries. We firmly believe in the power of data to transform businesses and make better decisions. We combine unmatched experience in business processes with cutting-edge infrastructure and cloud services. We partner with our customers to monetize their data and make enterprise data dance.
Our Values :
We are a passionate and empathetic team that prioritizes human values. Our purpose is to elevate the quality of lives for our family, customers, partners and the community.
Equal Opportunity Statement :
CLOUDSUFI is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. All qualified candidates receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, and national origin status. We provide equal opportunities in employment, advancement, and all other areas of our workplace.
Role: Senior Integration Engineer
Location: Remote/Delhi NCR
Experience: 4-8 years
Position Overview :
We are seeking a Senior Integration Engineer with deep expertise in building and managing integrations across Finance, ERP, and business systems. The ideal candidate will have both technical proficiency and strong business understanding, enabling them to translate finance team needs into robust, scalable, and fault-tolerant solutions.
Key Responsibilities:
- Design, develop, and maintain integrations between financial systems, ERPs, and related applications (e.g., expense management, commissions, accounting, sales)
- Gather requirements from Finance and Business stakeholders, analyze pain points, and translate them into effective integration solutions
- Build and support integrations using SOAP and REST APIs, ensuring reliability, scalability, and best practices for logging, error handling, and edge cases
- Develop, debug, and maintain workflows and automations in platforms such as Workato and Exactly Connect
- Support and troubleshoot NetSuite SuiteScript, Suiteflows, and related ERP customizations
- Write, optimize, and execute queries for Zuora (ZQL, Business Objects) and support invoice template customization (HTML)
- Implement integrations leveraging AWS (RDS, S3) and SFTP for secure and scalable data exchange
- Perform database operations and scripting using Python and JavaScript for transformation, validation, and automation tasks
- Provide functional support and debugging for finance tools such as Concur and Coupa
- Ensure integration architecture follows best practices for fault tolerance, monitoring, and maintainability
- Collaborate cross-functionally with Finance, Business, and IT teams to align technology solutions with business goals.
Qualifications:
- 3-8 years of experience in software/system integration with strong exposure to Finance and ERP systems
- Proven experience integrating ERP systems (e.g., NetSuite, Zuora, Coupa, Concur) with financial tools
- Strong understanding of finance and business processes: accounting, commissions, expense management, sales operations
- Hands-on experience with SOAP, REST APIs, Workato, AWS services, SFTP
- Working knowledge of NetSuite SuiteScript, Suiteflows, and Zuora queries (ZQL, Business Objects, invoice templates)
- Proficiency with databases, Python, JavaScript, and SQL query optimization
- Familiarity with Concur and Coupa functionality
- Strong debugging, problem-solving, and requirement-gathering skills
- Excellent communication skills and ability to work with cross-functional business teams.
Preferred Skills:
- Experience with integration design patterns and frameworks
- Exposure to CI/CD pipelines for integration deployments
- Knowledge of business and operations practices in financial systems and finance teams
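The reliability practices called out above (logging, error handling, retries for API integrations) can be sketched with a stdlib-only exponential-backoff helper. The URL, timeout, and retry policy are illustrative assumptions, not a specific vendor's API.

```python
import json
import time
import urllib.error
import urllib.request

def backoff_delays(retries, base=1.0):
    """Exponential backoff schedule: base, 2*base, 4*base, ..."""
    return [base * (2 ** i) for i in range(retries)]

def fetch_json(url, retries=3):
    """GET a JSON resource, retrying transient failures with backoff."""
    last_error = None
    for delay in backoff_delays(retries):
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                return json.loads(resp.read().decode())
        except urllib.error.URLError as exc:
            last_error = exc       # log and wait before the next attempt
            time.sleep(delay)
    raise RuntimeError(f"all retries failed: {last_error}")
```

Platforms like Workato bundle this behavior into connectors, but hand-built integrations still need the same edge-case handling spelled out explicitly.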



We’re seeking a highly skilled, execution-focused Senior Data Scientist with a minimum of 5 years of experience. This role demands hands-on expertise in building, deploying, and optimizing machine learning models at scale, while working with big data technologies and modern cloud platforms. You will be responsible for driving data-driven solutions from experimentation to production, leveraging advanced tools and frameworks across Python, SQL, Spark, and AWS. The role requires strong technical depth, problem-solving ability, and ownership in delivering business impact through data science.
Responsibilities
- Design, build, and deploy scalable machine learning models into production systems.
- Develop advanced analytics and predictive models using Python, SQL, and popular ML/DL frameworks (Pandas, Scikit-learn, TensorFlow, PyTorch).
- Leverage Databricks, Apache Spark, and Hadoop for large-scale data processing and model training.
- Implement workflows and pipelines using Airflow and AWS EMR for automation and orchestration.
- Collaborate with engineering teams to integrate models into cloud-based applications on AWS.
- Optimize query performance, storage usage, and data pipelines for efficiency.
- Conduct end-to-end experiments, including data preprocessing, feature engineering, model training, validation, and deployment.
- Drive initiatives independently with high ownership and accountability.
- Stay up to date with industry best practices in machine learning, big data, and cloud-native deployments.
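The end-to-end experiment loop described above (preprocessing, training, validation) can be sketched without any ML framework. This toy uses closed-form least squares on noiseless data; real work would use scikit-learn or Spark MLlib, but the workflow shape is the same.

```python
import random

def train_test_split(xs, ys, test_frac=0.25, seed=0):
    """Shuffle indices and split into train/validation sets."""
    idx = list(range(len(xs)))
    random.Random(seed).shuffle(idx)
    cut = int(len(idx) * (1 - test_frac))
    train, test = idx[:cut], idx[cut:]
    return ([xs[i] for i in train], [ys[i] for i in train],
            [xs[i] for i in test], [ys[i] for i in test])

def fit_line(xs, ys):
    """Closed-form least squares for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

def mse(ys, preds):
    """Mean squared error on the held-out set."""
    return sum((y - p) ** 2 for y, p in zip(ys, preds)) / len(ys)

# Synthetic, noiseless data: y = 2x + 1.
xs = list(range(20))
ys = [2 * x + 1 for x in xs]
xtr, ytr, xte, yte = train_test_split(xs, ys)
a, b = fit_line(xtr, ytr)
val_error = mse(yte, [a * x + b for x in xte])
```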
Requirements:
- Minimum 5 years of experience in Data Science or Applied Machine Learning.
- Strong proficiency in Python, SQL, and ML libraries (Pandas, Scikit-learn, TensorFlow, PyTorch).
- Proven expertise in deploying ML models into production systems.
- Experience with big data platforms (Hadoop, Spark) and distributed data processing.
- Hands-on experience with Databricks, Airflow, and AWS EMR.
- Strong knowledge of AWS cloud services (S3, Lambda, SageMaker, EC2, etc.).
- Solid understanding of query optimization, storage systems, and data pipelines.
- Excellent problem-solving skills, with the ability to design scalable solutions.
- Strong communication and collaboration skills to work in cross-functional teams.
Benefits:
- Best in class salary: We hire only the best, and we pay accordingly.
- Proximity Talks: Meet other designers, engineers, and product geeks — and learn from experts in the field.
- Keep on learning with a world-class team: Work with the best in the field, challenge yourself constantly, and learn something new every day.
About Us:
Proximity is the trusted technology, design, and consulting partner for some of the biggest Sports, Media, and Entertainment companies in the world! We’re headquartered in San Francisco and have offices in Palo Alto, Dubai, Mumbai, and Bangalore. Since 2019, Proximity has created and grown high-impact, scalable products used by 370 million daily users, with a total net worth of $45.7 billion among our client companies.
Today, we are a global team of coders, designers, product managers, geeks, and experts. We solve complex problems and build cutting-edge tech, at scale. Our team of Proxonauts is growing quickly, which means your impact on the company’s success will be huge. You’ll have the chance to work with experienced leaders who have built and led multiple tech, product, and design teams.

Job Description: Data Analyst
About the Role
We are seeking a highly skilled Data Analyst with strong expertise in SQL/PostgreSQL, Python (Pandas), Data Visualization, and Business Intelligence tools to join our team. The candidate will be responsible for analyzing large-scale datasets, identifying trends, generating actionable insights, and supporting business decisions across marketing, sales, operations, and customer experience.
Key Responsibilities
- Data Extraction & Management
- Write complex SQL queries in PostgreSQL to extract, clean, and transform large datasets.
- Ensure accuracy, reliability, and consistency of data across different platforms.
- Data Analysis & Insights
- Conduct deep-dive analyses to understand customer behavior, funnel drop-offs, product performance, campaign effectiveness, and sales trends.
- Perform cohort, LTV (lifetime value), retention, and churn analysis to identify opportunities for growth.
- Provide recommendations to improve conversion rates, average order value (AOV), and repeat purchase rates.
- Business Intelligence & Visualization
- Build and maintain interactive dashboards and reports using BI tools (e.g., PowerBI, Metabase or Looker).
- Create visualizations that simplify complex datasets for stakeholders and management.
- Python (Pandas)
- Use Python (Pandas, NumPy) for advanced analytics.
- Collaboration & Stakeholder Management
- Work closely with product, operations, and leadership teams to provide insights that drive decision-making.
- Communicate findings in a clear, concise, and actionable manner to both technical and non-technical stakeholders.
Required Skills
- SQL/PostgreSQL
- Complex joins, window functions, CTEs, aggregations, query optimization.
- Python (Pandas & Analytics)
- Data wrangling, cleaning, transformations, exploratory data analysis (EDA).
- Libraries: Pandas, NumPy, Matplotlib, Seaborn
- Data Visualization & BI Tools
- Expertise in creating dashboards and reports using Metabase or Looker.
- Ability to translate raw data into meaningful visual insights.
- Business Intelligence
- Strong analytical reasoning to connect data insights with e-commerce KPIs.
- Experience in funnel analysis, customer journey mapping, and retention analysis.
- Analytics & E-commerce Knowledge
- Understanding of metrics like CAC, ROAS, LTV, churn, contribution margin.
- General Skills
- Strong communication and presentation skills.
- Ability to work cross-functionally in fast-paced environments.
- Problem-solving mindset with attention to detail.
Education: Bachelor’s degree in Data Science, Computer Science, Data Processing, or a related field
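The SQL skills listed in the requirements (CTEs, window functions, aggregations) can be sketched against an in-memory SQLite database. Table and column names are illustrative; the same query shape carries over to PostgreSQL.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (customer TEXT, order_date TEXT, amount REAL);
    INSERT INTO orders VALUES
        ('alice', '2024-01-01', 100),
        ('alice', '2024-01-05', 50),
        ('bob',   '2024-01-02', 80);
""")
rows = conn.execute("""
    WITH per_order AS (               -- CTE: one row per order
        SELECT customer, order_date, amount FROM orders
    )
    SELECT customer,
           order_date,
           SUM(amount) OVER (         -- window fn: running total per customer
               PARTITION BY customer ORDER BY order_date
           ) AS running_total
    FROM per_order
    ORDER BY customer, order_date
""").fetchall()
for row in rows:
    print(row)
```

The running total resets per customer because of the `PARTITION BY`, which is exactly the kind of cohort/retention building block the role describes.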



Be a part of the team building the future of healthcare!
About the company
Firefly Health is building a revolutionary new type of comprehensive health care and coverage, powered by a relationship-driven care team, a trusted virtual and in-person clinical network, and our proprietary technology platform.
Founded by experienced clinicians and technology leaders, Firefly Health is on a mission to deliver clinical and financial health through joyful, always-there care. We are flipping the script on what it means to be a health plan and actually providing a true health benefit to members.
We are intensely focused on optimizing the physical + mental + financial wellbeing of those who want (and deserve) something better than the status quo. If you are ready to roll up your sleeves and take on our audacious mission, we would love to hear from you.
About the Role
We are looking for a Senior Software Engineer who thrives in a high growth environment to join our core engineering team. This team works side-by-side with our clinical, operations, and product teams to change the experience of healthcare.
The engineering team’s goal is to create an integrated, intelligent platform that proactively manages healthcare at scale and makes high-quality benefits accessible and engaging for all. You’ll help power a platform that not only serves members, providers, and employers, but also leverages advanced technologies to drive efficiency and better outcomes.
Our tech stack includes React, React Native, Typescript, Python, Django, Postgres, Docker, and AWS. It’s a great modern foundation for tackling the challenges ahead of us.
You Will
- Partner with our clinical, operations, and product teams to design and build technology underlying our alternate health plan and care experiences
- Build beautiful mobile apps, web applications, and APIs that make high-quality health benefits and world class care accessible and engaging for members
- Integrate with third-party vendors and health tech innovations to expand our platform’s functionality, data sources, and member services
- Work on a member-facing platform and clinical intelligence platform used by our care teams to deliver efficient, quality care
- Own projects from ideation to release, influencing both product and technical decisions
- Contribute to technical architecture and best practices across the team
You’d Be a Good Fit If
- You have 5+ years of experience writing production code across the stack
- You resonate with our mission: to deliver clinical and financial health through joyful, always-there care
- You love learning and unstructured problem-solving
- You enjoy and take ownership in designing, building, and shipping features end-to-end
- You approach challenges with humility and resilience
- You strive to build collaborative engineering cultures through thoughtful code review, documentation, mentorship, and teamwork
Nice to Haves
- Experience with the specific technologies in our stack
- Experience working with machine learning models, agentic workflows, or automation in the healthcare space
- Experience overcoming the messiness of healthcare data
- Experience working with data-exchange and standardizing disparate data feeds
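"Standardizing disparate data feeds" can be sketched as normalizing dates that arrive in different formats into ISO 8601. The format list here is an invented illustration; real healthcare feeds (HL7 messages, CCDs, claims files) vary far more widely.

```python
from datetime import datetime

# Formats seen across hypothetical upstream feeds (illustrative, not exhaustive).
KNOWN_FORMATS = ["%Y-%m-%d", "%m/%d/%Y", "%d %b %Y"]

def normalize_date(raw):
    """Try each known feed format and return an ISO 8601 date string."""
    for fmt in KNOWN_FORMATS:
        try:
            return datetime.strptime(raw.strip(), fmt).date().isoformat()
        except ValueError:
            continue
    raise ValueError(f"unrecognized date format: {raw!r}")
```

Failing loudly on unrecognized formats (rather than guessing) is the design choice that keeps "messy healthcare data" from silently corrupting downstream records.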

About the Job
Mid-Level Backend Developer
Experience: 3–5 Years
Salary: Competitive
Preferred Notice Period: Immediate to 30 Days
Opportunity Type: Remote (Global)
Placement Type: Freelance/Contract
(Note: This is a requirement for one of TalentLo’s Clients)
Role Overview Description
We’re seeking experienced backend developers with 3–5 years of professional experience to contribute to backend architecture, API development, and system optimization. This is a freelance/contract opportunity where you’ll work remotely with global clients on impactful projects.
Responsibilities
- Design, develop, and maintain backend systems ensuring scalability and performance
- Build, document, and maintain RESTful APIs and integrations with third-party services
- Optimize database queries, schemas, and indexing for efficiency
- Implement and enforce security best practices across backend systems
- Troubleshoot, debug, and resolve technical issues in backend pipelines
- Support deployment processes and contribute to CI/CD and DevOps workflows
- Collaborate with front-end developers, product teams, and other stakeholders to deliver features effectively
Requirements:
- 3-5 years developing backend systems
- Proficiency in one or more languages (Node.js, Python, Java, C#, PHP)
- Strong database design and optimization skills
- API development and documentation experience
- Understanding of security best practices
- Experience with deployment and basic DevOps
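The "database design and optimization" skill above can be illustrated with SQLite's `EXPLAIN QUERY PLAN`, which shows whether a lookup scans the whole table or uses an index. Table and index names are invented for the example.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")

def plan(query):
    """Return SQLite's query-plan description for the given statement."""
    return " ".join(row[-1] for row in
                    conn.execute("EXPLAIN QUERY PLAN " + query))

before = plan("SELECT id FROM users WHERE email = 'a@example.com'")
conn.execute("CREATE INDEX idx_users_email ON users(email)")
after = plan("SELECT id FROM users WHERE email = 'a@example.com'")
print(before)   # e.g. "SCAN users" -- full table scan
print(after)    # e.g. "SEARCH users USING ... idx_users_email ..."
```

The same before/after discipline (inspect the plan, add the index, re-inspect) applies to `EXPLAIN ANALYZE` in PostgreSQL or MySQL.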
How to Apply
- Create your profile on TalentLo’s platform (https://www.talentlo.com/signup)
- Submit your resume, GitHub, or portfolio
- Get shortlisted & connect with the client
About TalentLo
TalentLo is a revolutionary freelance platform connecting exceptional tech talent with high-quality clients worldwide. We’re building a select pool of talented freelancers to match with companies actively seeking specialized freelance expertise.
✨ If you’re ready to work on exciting backend projects, collaborate with global teams, and take your career to the next level — apply today!

We are seeking an experienced and hands-on Senior Developer to play a critical role in the evolution of our platform. This role will take primary responsibility for designing and implementing a range of core system and feature capabilities and will be expected to contribute to the overall information model, software architecture, and coding standards of the product. The position is designed to alleviate the hands-on coding demands on our current VP of R&D.
You will work closely with our existing engineering leadership and cross-functional teams to ensure scalable, maintainable, and high-quality code delivery. You will also actively contribute to development work, assist with critical PR reviews, and mentor team members. From time to time, you may be called upon to deploy infrastructure and software on Azure via Terraform and Ansible.
This role requires deep expertise in Python-based software architecture, strong data engineering and machine learning understanding, and a demonstrated ability to align technical execution with business goals.
Responsibilities
Architecture & Technical Leadership
- Own and develop a variety of core system capabilities and reporting features.
- Define and enforce coding standards, design patterns, and best practices across the development team.
- Perform rigorous code reviews to ensure high-quality, maintainable, and scalable codebases.
- Work closely with the VP of R&D and cross-functional teams to ensure code developed by the team aligns with Oii architecture, product strategy, and long-term scalability.
Development & Code Quality
- Contribute hands-on to critical components of the codebase.
- Guide the team in balancing technical debt, scalability, and delivery speed.
- Maintain high standards in code versioning, testing, documentation, and deployment.
Collaboration & Mentorship
- Act as a technical advisor to the product and leadership teams.
- Mentor and support the development team, helping to upskill engineers in architecture, design patterns, and problem-solving.
- Proactively identify potential risks and propose mitigation strategies.
Infrastructure & Deployment
- Oversee infrastructure as code practices using Terraform and Azure.
- Maintain robust CI/CD pipelines leveraging GitHub workflows.
- Ensure efficient and reliable deployment processes.
Requirements
- 8+ years of professional software development experience.
- Expert-level Python programming skills.
- Deep experience with:
- GraphQL API design and development.
- Alembic for database migrations.
- Poetry for dependency and environment management.
- GitHub workflows, CI/CD deployment pipelines, and PR processes.
- Azure cloud services, Terraform, and Ansible for infrastructure deployment.
- Strong understanding of machine learning workflows, data engineering pipelines, and production ML operations.
- Experience designing scalable distributed systems.
- Ability to critically assess and review code to maintain architectural integrity.
- Excellent written and verbal communication skills.
Additional Experience (Nice to Have)
- Experience with AI agent frameworks or agentic tooling.
- Prior experience in supply chain, manufacturing, or industrial SaaS platforms.
- Experience contributing to SaaS platform security and compliance practices.
- Prior experience in fast-growth startups or scaling technology organizations.
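The Poetry dependency-management experience listed above centers on a `pyproject.toml` like the following. This is a generic illustrative fragment; the project name, versions, and dependencies are placeholders, not this company's actual configuration.

```toml
[tool.poetry]
name = "example-service"
version = "0.1.0"
description = "Illustrative Poetry-managed project"
authors = ["Your Name <you@example.com>"]

[tool.poetry.dependencies]
python = "^3.11"

[tool.poetry.group.dev.dependencies]
pytest = "^8.0"

[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
```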

Job Title: AI Architecture Intern
Company: PGAGI Consultancy Pvt. Ltd.
Location: Remote
Employment Type: Internship
Position Overview
We're at the forefront of creating advanced AI systems, from fully autonomous agents that provide intelligent customer interaction to data analysis tools that offer insightful business solutions. We are seeking enthusiastic interns who are passionate about AI and ready to tackle real-world problems using the latest technologies.
Duration: 6 months
Key Responsibilities:
- AI System Architecture Design: Collaborate with the technical team to design robust, scalable, and high-performance AI system architectures aligned with client requirements.
- Client-Focused Solutions: Analyze and interpret client needs to ensure architectural solutions meet expectations while introducing innovation and efficiency.
- Methodology Development: Assist in the formulation and implementation of best practices, methodologies, and frameworks for sustainable AI system development.
- Technology Stack Selection: Support the evaluation and selection of appropriate tools, technologies, and frameworks tailored to project objectives and future scalability.
- Team Collaboration & Learning: Work alongside experienced AI professionals, contributing to projects while enhancing your knowledge through hands-on involvement.
Requirements:
- Strong understanding of AI concepts, machine learning algorithms, and data structures.
- Familiarity with AI development frameworks (e.g., TensorFlow, PyTorch, Keras).
- Proficiency in programming languages such as Python, Java, or C++.
- Demonstrated interest in system architecture, design thinking, and scalable solutions.
- Up-to-date knowledge of AI trends, tools, and technologies.
- Ability to work independently and collaboratively in a remote team environment
Perks:
- Hands-on experience with real AI projects.
- Mentoring from industry experts.
- A collaborative, innovative and flexible work environment
Compensation:
- Joining Bonus: A one-time bonus of INR 2,500 will be awarded upon joining.
- Stipend: Base stipend is INR 8,000/- and can increase up to INR 20,000/- depending on performance metrics.
After completion of the internship period, there is a chance to get a full-time opportunity as an AI/ML engineer (Up to 12 LPA).
Preferred Experience:
- Prior experience in roles such as AI Solution Architect, ML Architect, Data Science Architect, or AI/ML intern.
- Exposure to AI-driven startups or fast-paced technology environments.
- Proven ability to operate in dynamic roles requiring agility, adaptability, and initiative.
Digital and Web Analytics Engineer
Location- Remote
Experience Range- 5-8 years
Job Type- Contractual role for 1 year
Desired Skills: Adobe Customer Journey Analytics, Adobe Analytics, Adobe Launch, Adobe Target, Google Analytics, Google Tag Manager, Tealium, Snowplow. Strong programming skills in JavaScript, HTML/CSS, Python, SQL
Job Description:
Required Skills & Experience -
● Proven expertise (5+ years) implementing and managing large-scale web analytics using tools such as: Adobe Customer Journey Analytics, Adobe Analytics, Adobe Launch, Adobe Target, Google Analytics, Google Tag Manager, Tealium, Snowplow, etc.
● Expertise in Adobe Analytics to Adobe Customer Journey Analytics migrations, and in migrating from the AppMeasurement framework to Web SDK
● Working knowledge of:
● Adobe Launch (Tag Management)
● Adobe Analytics & Customer Journey Analytics
● Major advertising platforms: Google, Facebook, LinkedIn, Epsilon
● Experience integrating and supporting MarTech/ad-tech platforms (6sense, Adobe Target, etc.), and rapidly onboarding new vendor solutions.
● Demonstrable ability to build and maintain dashboards/reports in Adobe Analytics, Adobe CJA and Tableau.
● Strong experience with hit-level querying in Adobe Customer Journey Analytics and customer-level data analysis.
● Hands-on experience with Event Driven DataLayer development (with and without tag management).
● Experience with tag deployment based on consent frameworks, specifically OneTrust.
● Strong analytical, project management, and written/verbal communication skills
● Strong programming skills in JavaScript, HTML/CSS, Python, SQL
● Familiarity with HTML, variable mapping, cookies, server call types, link tracking, identity management, packet sniffing.
● Ability and willingness to work independently across all phases of the analytics implementation lifecycle.
● Ability to work within an Agile development framework and communicate across non-technical and technical stakeholder groups
Roles and Responsibility -
● Support demand generation teams with new reporting and third-party tracking requests (across media vendors such as Google, Facebook, LinkedIn, Epsilon, etc.).
● Implement and update Adobe CJA and Analytics/tracking scripts, variables, and integrations using Adobe Launch, ensuring accurate mapping, server call types, cookies, data Layers, and identity management across web properties.
● Quickly onboard and integrate new ad-tech and third-party vendors (e.g., 6sense, Adobe Target, LinkedIn, Facebook, Google, etc.).
● Develop and maintain dashboards and reports in Adobe Analytics and Tableau to provide actionable insights for marketing stakeholders.
● Monitor and ensure data accuracy and integrity across our analytics platforms via ongoing QA, automation scripts, and outage detection.
● Support end-to-end QA cycles: completing user acceptance testing (UAT), validating data integrity, and confirming reporting accuracy on new and updated implementations.
● Conduct ad-hoc analyses (in Adobe CJA, Adobe Target, Adobe Analytics, Snowflake, and Tableau) to support marketing and product teams.
● Maintain integration with OneTrust to ensure analytics tags are deployed only after proper visitor consent is granted.
● Advise on data privacy best practices: Stay up-to-date with GDPR/privacy regulation changes and ensure ongoing compliance.
● Collaborate within an Agile framework and communicate cross-functionally with teams across the enterprise.
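For illustration, the outage-detection automation described above can be sketched in Python. This is a hedged example: the threshold, window, and traffic numbers are invented, not a description of any existing QA script.

```python
# Hypothetical sketch: flag a likely tracking outage by comparing the latest
# day's hit count against the trailing average (all values are illustrative).

def detect_outage(daily_hits, window=7, drop_threshold=0.5):
    """Return True if the latest day's hits fall below
    (1 - drop_threshold) x the trailing `window`-day average."""
    if len(daily_hits) < window + 1:
        return False  # not enough history to judge
    latest = daily_hits[-1]
    baseline = sum(daily_hits[-window - 1:-1]) / window
    return latest < (1 - drop_threshold) * baseline

# Example: steady traffic, then a sudden drop on the last day.
history = [10200, 9800, 10100, 9900, 10050, 10000, 9950, 3100]
print(detect_outage(history))  # True: a >50% drop vs. the weekly average
```

A scheduled job running a check like this against daily Adobe/CJA exports is one simple way to catch broken tags before stakeholders notice missing data.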

Position Responsibilities
The Software Engineer will be part of the Deltek Integration Center of Excellence team. The role involves developing integration and automation solutions per outlined business requirements.
- Design and develop integration and automation solutions based on technical specifications.
- Support in testing activities, including integration testing, end-to-end (business process) testing, and UAT.
- Stay aware of CI/CD, engineering best practices, and the SDLC process.
- Develop and maintain a very good understanding of all existing integrations and automations.
- Be a Subject Matter Expert with the ability to discover and demonstrate how integration solutions can effectively help companies with their business automation needs across a broad set of industries
- Deliver project work within stipulated project timelines.
- Ability to communicate technically complex ideas to a non-technical audience.
- Maintain quality and adhere to SLAs, processes, and security policies.
- Collaborate with internal teams during triage and resolution of integration and automation issues
Qualifications
- Ambitious tech-support expert with a Bachelor's degree
- 2-4 years of experience in developing integration/automation solutions, or related experience
- Knowledge of at least one programming language; Python is a plus
- Good understanding of integration concepts, methodologies, and technologies
- Ability to learn new concepts and technologies and to solve problems
- Good communication and presentation skills
- Strong interpersonal skills with the ability to convey and relate ideas to others and work collaboratively to get things done
- Comfortable interpreting API documentation for cloud-based applications; knows basic programming logic and SQL
- Must be knowledgeable about AI and use it to help streamline tasks.
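As a hedged illustration of the last qualifications (interpreting a cloud application's API response and applying basic SQL), here is a minimal Python sketch. The payload shape, endpoint, and table are all invented for the example.

```python
# Illustrative only: take a JSON payload as a cloud application's API might
# return it (shape is hypothetical), and load the records into SQL for
# downstream integration work. Uses only the standard library.
import json
import sqlite3

payload = json.loads("""
{"invoices": [
  {"id": "INV-001", "amount": 1250.00, "status": "paid"},
  {"id": "INV-002", "amount": 830.50,  "status": "open"}
]}
""")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE invoices (id TEXT PRIMARY KEY, amount REAL, status TEXT)")
conn.executemany(
    "INSERT INTO invoices VALUES (:id, :amount, :status)",  # named placeholders
    payload["invoices"],
)

# Basic SQL: total value of open invoices.
(open_total,) = conn.execute(
    "SELECT COALESCE(SUM(amount), 0) FROM invoices WHERE status = 'open'"
).fetchone()
print(open_total)  # 830.5
```

In real integration work the payload would come from an authenticated HTTP call described in the vendor's API documentation, but the parse-then-load pattern is the same.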

Role Overview
The Python Developer is responsible for building, debugging, and implementing application projects in the Python programming language, and for developing program specifications and code modules according to specifications and client standards.
Responsibilities
- Advanced Python Programming: Extensive experience in Python, with a deep understanding of Python principles, design patterns, and best practices. Proficiency in developing scalable and efficient Python code, with a focus on automation, data processing, and backend services. Demonstrated ability with automation libraries like PyAutoGUI for GUI automation tasks, enabling the automation of mouse and keyboard actions.
- Experience with Selenium for web automation: Capable of automating web browsers to mimic user actions, scrape web data, and test web applications.
- Python Frameworks and Libraries: Strong experience with popular Python frameworks and libraries relevant to data processing, web application development, and automation, such as Flask or Django for web development, Pandas and NumPy for data manipulation, and Celery for task scheduling.
- SQL Server Expertise: Advanced knowledge of SQL Server management and development.
- API Development and Integration: Experience in developing and consuming APIs. Understanding of API security best practices. Familiarity with integrating third-party services and APIs into the existing ecosystem.
- Version Control and CI/CD: Proficiency in using version control systems, such as Git. Experience with continuous integration and continuous deployment (CI/CD) pipelines.
- Unit Testing and Debugging: Strong understanding of testing practices, including unit and integration testing. Experience with Python testing frameworks. Skilled in debugging and performance profiling.
- Containerization and Virtualization: Familiarity with containerization and orchestration tools, such as Docker and Kubernetes, to enhance application deployment and scalability.
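The unit-testing practice above can be sketched minimally with the standard library's unittest. The function under test and its cases are invented for illustration (a claim-ID normalizer, in keeping with the healthcare RCM context).

```python
# Minimal unittest sketch: a small pure function plus two test cases,
# run programmatically so the result can be inspected.
import unittest

def normalize_claim_id(raw: str) -> str:
    """Trim whitespace and upper-case a claim identifier."""
    return raw.strip().upper()

class NormalizeClaimIdTests(unittest.TestCase):
    def test_strips_and_uppercases(self):
        self.assertEqual(normalize_claim_id("  ab123 "), "AB123")

    def test_already_normalized_is_unchanged(self):
        self.assertEqual(normalize_claim_id("XY9"), "XY9")

suite = unittest.defaultTestLoader.loadTestsFromTestCase(NormalizeClaimIdTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```

The same structure scales up: one test class per unit, deterministic inputs, and assertions on observable behavior rather than implementation details.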
Requirements & Skills
- Analytical Thinking: Ability to analyze complex problems and break them down into manageable parts. Strong logical reasoning and troubleshooting skills.
- Communication: Excellent verbal and written communication skills. Ability to effectively articulate technical challenges and solutions to both technical and non-technical team members.
- Team Collaboration: Experience working in agile development environments. Ability to work collaboratively in cross-functional teams and with stakeholders from different backgrounds.
- Continuous Learning: A strong desire to learn new technologies and frameworks. Keeping up to date with industry trends and advancements in healthcare RCM, AI, and automation technologies.
Additional Preferred Skills
- Familiarity with healthcare industry standards and regulations, such as HIPAA, is highly advantageous.
- Understanding of healthcare revenue cycle management processes and challenges.
- Experience with healthcare data formats and standards (e.g., HL7, FHIR) is beneficial.
Educational Qualifications
- Bachelor’s degree in a related field
- Relevant technical certifications are a plus

🚀 We’re Hiring – Senior Trader / Portfolio Manager (Remote, Global)
An innovative quantitative trading and investment firm in the digital assets space is looking for an experienced Senior Trader / Portfolio Manager with expertise in high-frequency trading to join their team.
💼 Role Highlights:
*Research, test, and implement quantitative trading strategies
*Analyze market data using advanced statistical tools
*Optimize and improve existing automated strategies
*Collaborate with developers to enhance trading systems
*Provide mentorship and guidance to traders
🎯 We’re Looking For:
*4–8 years of high-frequency trading experience (crypto HFT a plus)
*Expertise in market-making and delta-one products
*Proven track record of running profitable strategies
*Strong skills in C++ or similar programming languages
*Competitive, ownership-driven mindset
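As a rough sketch only (emphatically not the firm's actual methodology), the market-making expertise listed above centers on quoting around a fair value with an inventory skew; all numbers below are invented.

```python
# Toy market-making quote: centre bid/ask on a fair value, and skew both
# quotes against current inventory (long inventory pushes quotes down to
# discourage further buys; short inventory pushes them up).
def make_quotes(fair_value, half_spread, inventory, skew_per_unit=0.01):
    """Return (bid, ask) as floats rounded to 4 decimal places."""
    skew = -inventory * skew_per_unit
    bid = fair_value - half_spread + skew
    ask = fair_value + half_spread + skew
    return round(bid, 4), round(ask, 4)

print(make_quotes(100.0, 0.05, 0))   # (99.95, 100.05): flat book, symmetric quotes
print(make_quotes(100.0, 0.05, 10))  # (99.85, 99.95): long 10 units, quotes shift down
```

Real desks layer queue position, adverse-selection models, and latency considerations on top of this skeleton.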
🌍 Why Join?
*Fully remote, global role
*Flat hierarchy, full ownership of work
*Fast-paced environment with rapid idea implementation
*Market-competitive pay + profit sharing + equity
*Perks: medical insurance, team outings, off-sites
If you’re passionate about building cutting-edge quantitative trading strategies and thriving in a high-performance environment, we’d love to hear from you!

Job Title : Backend / API Developer - Python (FastAPI) / Node.js (Express)
Location : Remote
Experience : 4+ Years
Job Description :
We are looking for a skilled Backend / API Developer - Python (FastAPI) / Node.js (Express) with strong expertise in building secure, scalable, and reliable backend systems. The ideal candidate should be proficient in Python (FastAPI preferred) or Node.js (Express) and have hands-on experience deploying applications to serverless environments.
Key Responsibilities :
- Design, develop, and maintain RESTful APIs and backend services.
- Deploy and manage serverless applications on Cloudflare Workers, Firebase Functions, and Google Cloud Functions.
- Work with Google Cloud services including Cloud Run, Cloud Functions, Secret Manager, and IAM roles.
- Implement secure API development practices (HTTPS, input validation, and secrets management).
- Ensure performance optimization, scalability, and reliability of backend systems.
- Collaborate with front-end developers, DevOps, and product teams to deliver high-quality solutions.
Mandatory Skills :
Python (FastAPI) / Node.js (Express), Serverless Deployment (Cloudflare Workers, Firebase, GCP Functions), Google Cloud Services (Cloud Run, IAM, Secret Manager), API Security (HTTPS, Input Validation, Secrets Management).
Required Skills :
- Proficiency in Python (FastAPI preferred) or Node.js (Express).
- Hands-on experience with serverless platforms (Cloudflare Workers, Firebase Functions, GCP Functions).
- Familiarity with Google Cloud services (Cloud Run, IAM, Secret Manager, Cloud Functions).
- Strong understanding of secure API development (HTTPS, input validation, API keys & secret management).
- Knowledge of API design principles and best practices.
- Ability to work with CI/CD pipelines and modern development workflows.
Preferred Qualifications :
- Strong knowledge of microservices architecture.
- Experience with CI/CD pipelines.
- Knowledge of containerization (Docker, Kubernetes).
- Familiarity with monitoring and logging tools.
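A minimal, framework-agnostic sketch of the API-security practices this posting lists (input validation and secrets handling); the field rules, secret, and request body are illustrative, not a real service's contract.

```python
# Illustrative sketch: constant-time HMAC signature verification plus simple
# payload validation. In production, the secret would come from a secret
# manager (e.g., GCP Secret Manager), never from source code.
import hashlib
import hmac

SHARED_SECRET = b"demo-secret"  # placeholder for illustration only

def verify_signature(body: bytes, signature_hex: str) -> bool:
    """Check an HMAC-SHA256 request signature in constant time."""
    expected = hmac.new(SHARED_SECRET, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_hex)

def validate_payload(data: dict) -> list:
    """Return validation errors for a hypothetical 'create user' call."""
    errors = []
    if not isinstance(data.get("email"), str) or "@" not in data.get("email", ""):
        errors.append("email must be a valid address")
    if not isinstance(data.get("age"), int) or not (0 < data["age"] < 150):
        errors.append("age must be an integer between 1 and 149")
    return errors

body = b'{"email": "a@example.com", "age": 30}'
sig = hmac.new(SHARED_SECRET, body, hashlib.sha256).hexdigest()
print(verify_signature(body, sig))                               # True
print(validate_payload({"email": "a@example.com", "age": 30}))   # []
```

In FastAPI the validation half is usually delegated to Pydantic models, and in Express to a middleware layer, but the underlying checks are the same.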

🚀 We’re Hiring: Quantitative Developer (Remote) 📊
We’re looking for a passionate Quant Developer to join our dynamic, innovation-driven team.
If you love numbers, algorithms, and building trading strategies that blend AI with quantitative research, this is your chance to work on live trading systems with cutting-edge tools.
🔑 Key Responsibilities:
*Develop & backtest algorithmic trading strategies using historical market data.
*Conduct quantitative research & build predictive models.
*Automate data pipelines for efficient strategy development.
*Optimize risk & performance metrics for robust execution.
*Collaborate with AI teams to integrate ML models into trading systems.
💡 What We’re Looking For:
*Strong foundation in mathematics, statistics, and probability.
*Proficiency in Python (NumPy, Pandas, Matplotlib, SciPy); Pine Script is a plus.
*Knowledge of trading systems & market microstructure.
*Experience with backtesting frameworks (Backtrader, Zipline, QuantConnect).
*Analytical mindset with strong problem-solving skills.
✨ Preferred Qualifications:
*Experience with broker APIs (e.g., Zerodha, Interactive Brokers).
*SQL/NoSQL or cloud-based data platform experience.
*Exposure to ML models in financial forecasting.
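For illustration only: the backtesting work above, done in frameworks like Backtrader or Zipline, reduces at its core to a loop like the toy moving-average-crossover sketch below. The prices and parameters are synthetic.

```python
# Toy backtest: go long when the fast simple moving average is above the
# slow one (decided on the prior bar), else stay flat. No costs or slippage.
def sma(series, n, i):
    """Simple moving average of the n values ending at index i (inclusive)."""
    return sum(series[i - n + 1 : i + 1]) / n

def backtest(prices, fast=3, slow=5):
    equity = 1.0
    for i in range(slow, len(prices)):
        # Signal formed on bar i-1; return earned over bar i-1 -> i.
        if sma(prices, fast, i - 1) > sma(prices, slow, i - 1):
            equity *= prices[i] / prices[i - 1]
    return equity

prices = [100, 101, 102, 103, 104, 105, 104, 103, 102, 101]
print(round(backtest(prices), 4))  # final equity multiple on this toy series
```

Real frameworks add order simulation, transaction costs, and portfolio accounting on top, but evaluating a signal bar-by-bar against held-out history is the same idea.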
🌟 Why Join Us?
*Work on live AI-powered trading strategies.
*Learn from industry experts & get ongoing mentorship.
*Enjoy the flexibility of remote work.
*Be part of a fast-paced, innovation-first environment.

Job Title: IT and Cybersecurity Network Backend Engineer (Remote)
Job Summary:
Join Springer Capital’s elite tech team to architect and fortify our digital infrastructure, ensuring robust, secure, and scalable backend systems that power cutting‑edge investment solutions.
Job Description:
Founded in 2015, Springer Capital is a technology-forward asset management and investment firm that redefines financial strategies through innovative digital solutions. We identify high-potential opportunities and leverage advanced technology to drive value, transforming traditional investment paradigms. Our culture is built on agility, creative problem-solving, and a relentless pursuit of excellence.
Job Highlights:
As an IT and Cybersecurity Network Backend Engineer, you will play a central role in designing, developing, and securing our backend systems. You’ll be responsible for creating bulletproof server architectures and integrating sophisticated cybersecurity measures to ensure our digital assets remain secure, reliable, and scalable—all while working fully remotely.
Responsibilities:
- Backend Architecture & Security:
- Design, develop, and maintain high-performance backend systems and RESTful APIs using technologies such as Python, Node.js, or Java.
- Implement advanced cybersecurity protocols including encryption, multi-factor authentication, and anomaly detection to safeguard our infrastructure.
- Network Infrastructure Management:
- Architect secure cloud and hybrid network solutions to protect sensitive data and ensure uninterrupted service.
- Develop robust logging, monitoring, and compliance mechanisms.
- Collaborative Innovation:
- Partner with cross-functional teams (DevOps, frontend, and product managers) to integrate security seamlessly into every layer of our technology stack.
- Participate in regular security audits, agile sprints, and technical reviews.
- Continuous Improvement:
- Keep abreast of emerging technologies and cybersecurity threats, proposing and implementing innovative solutions to maintain system integrity.
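One simple form of the anomaly detection mentioned in the responsibilities above is a z-score check on request rates. This is a hedged sketch with invented numbers, not a description of Springer Capital's systems.

```python
# Illustrative z-score anomaly check: flag a per-minute request count that
# sits far outside the recent baseline distribution.
import statistics

def is_anomalous(history, latest, z_threshold=3.0):
    """Flag `latest` if it lies more than z_threshold sample standard
    deviations from the mean of `history`."""
    mean = statistics.fmean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return latest != mean
    return abs(latest - mean) / stdev > z_threshold

baseline = [118, 121, 119, 122, 120, 118, 121, 120]
print(is_anomalous(baseline, 121))  # False: within normal traffic
print(is_anomalous(baseline, 480))  # True: plausible scraping burst or attack spike
```

Production systems typically feed checks like this from structured logs and pair them with alerting, but the statistical core is this small.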
What We Offer:
- Advanced Learning & Mentorship: Work side-by-side with industry experts who will empower you to push the boundaries of cybersecurity and backend engineering.
- Impactful Work: Engage in projects that directly influence the security and scalability of our revolutionary digital investment strategies.
- Dynamic, Remote Culture: Thrive in a flexible, remote-first environment that champions creativity, collaboration, and work-life balance.
- Career Growth: Unlock long-term career advancement opportunities in a forward-thinking organization that values innovation and initiative.
Requirements:
- Degree (or current enrollment) in Computer Science, Cybersecurity, or a related field.
- Proficiency in at least one backend programming language (Python, Node.js, or Java) and hands-on experience with RESTful API design.
- Solid understanding of network security principles and experience implementing cybersecurity best practices.
- Passionate about designing secure systems, solving complex technical challenges, and staying ahead of industry trends.
- Strong analytical and communication skills, with the ability to work effectively in a collaborative, fast-paced environment.
About Springer Capital:
At Springer Capital, we blend financial expertise with digital innovation to shape tomorrow’s investment landscape. Our relentless drive to merge technology and asset management has positioned us as leaders in transforming traditional finance into dynamic, tech-enabled ventures.
Location: Global (Remote)
Job Type: Full-time
Pay: $50 USD per month
Work Location: Remote
Embark on your next challenge with Springer Capital—where your technical prowess and dedication to security help safeguard the future of digital investments.

Description
Job Description:
Company: Springer Capital
Type: Internship (Remote, Part-Time/Full-Time)
Duration: 3–6 months
Start Date: Rolling
Compensation:
About the role:
We’re building high-performance backend systems that power our financial and ESG intelligence platforms and we want you on the team. As a Backend Engineering Intern, you’ll help us develop scalable APIs, automate data pipelines, and deploy secure cloud infrastructure. This is your chance to work alongside experienced engineers, contribute to real products, and see your code go live.
What You'll Work On:
As a Backend Engineering Intern, you’ll be shaping the systems that power financial insights.
Engineering scalable backend services in Python, Node.js, or Go
Designing and integrating RESTful APIs and microservices
Working with PostgreSQL, MongoDB, or Redis for data persistence
Deploying on AWS/GCP, using Docker, and learning Kubernetes on the fly
Automating infrastructure and shipping faster with CI/CD pipelines
Collaborating with a product-focused team that values fast iteration
What We’re Looking For:
A builder mindset – you like writing clean, efficient code that works
Strong grasp of backend languages (Python, Java, Node, etc.)
Understanding of cloud platforms and containerization basics
Basic knowledge of databases and version control
Students or self-taught engineers actively learning and building
Preferred skills:
Experience with serverless or event-driven architectures
Familiarity with DevOps tools or monitoring systems
A curious mind for AI/ML, fintech, or real-time analytics
What You’ll Get:
Real-world experience solving core backend problems
Autonomy and ownership of live features
Mentorship from engineers who’ve built at top-tier startups
A chance to grow into a full-time offer




Job description
Job Title: AI-Driven Data Science Automation Intern – Machine Learning Research Specialist
Location: Remote (Global)
Compensation: $50 USD per month
Company: Meta2 Labs
www.meta2labs.com
About Meta2 Labs:
Meta2 Labs is a next-gen innovation studio building products, platforms, and experiences at the convergence of AI, Web3, and immersive technologies. We are a lean, mission-driven collective of creators, engineers, designers, and futurists working to shape the internet of tomorrow. We believe the next wave of value will come from decentralized, intelligent, and user-owned digital ecosystems—and we’re building toward that vision.
As we scale our roadmap and ecosystem, we're looking for a driven, aligned, and entrepreneurial AI-Driven Data Science Automation Intern – Machine Learning Research Specialist to join us on this journey.
The Opportunity:
We’re seeking a part-time AI-Driven Data Science Automation Intern – Machine Learning Research Specialist to join Meta2 Labs at a critical early stage. This is a high-impact role designed for someone who shares our vision and wants to actively shape the future of tech. You’ll be an equal voice at the table and help drive the direction of our ventures, partnerships, and product strategies.
Responsibilities:
- Collaborate on the vision, strategy, and execution across Meta2 Labs' portfolio and initiatives.
- Drive innovation in areas such as AI applications, Web3 infrastructure, and experiential product design.
- Contribute to go-to-market strategies, business development, and partnership opportunities.
- Help shape company culture, structure, and team expansion.
- Be a thought partner and problem-solver in all key strategic discussions.
- Lead or support verticals based on your domain expertise (e.g., product, technology, growth, design, etc.).
- Act as a representative and evangelist for Meta2 Labs in public or partner-facing contexts.
Ideal Profile:
- Passion for emerging technologies (AI, Web3, XR, etc.).
- Comfortable operating in ambiguity and working lean.
- Strong strategic thinking, communication, and collaboration skills.
- Open to wearing multiple hats and learning as you build.
- Driven by purpose and eager to gain experience in a cutting-edge tech environment.
Commitment:
- Flexible, part-time involvement.
- Remote-first and async-friendly culture.
Why Join Meta2 Labs:
- Join a purpose-led studio at the frontier of tech innovation.
- Help build impactful ventures with real-world value and long-term potential.
- Shape your own role, focus, and future within a decentralized, founder-friendly structure.
- Be part of a collaborative, intellectually curious, and builder-centric culture.
Job Types: Part-time, Internship
Pay: $50 USD per month
Work Location: Remote
Job Types: Full-time, Part-time, Internship
Contract length: 3 months
Pay: Up to ₹5,000.00 per month
Benefits:
- Flexible schedule
- Health insurance
- Work from home
Work Location: Remote


Backend Engineering Intern (Infrastructure Software) – Remote
Position Type: Internship (Full-Time or Part-Time)
Location: Remote
Duration: 12 weeks
Compensation: Unpaid (***3000 INR is just a placeholder***)
About the Role
We are seeking a motivated Backend Developer Intern to join our engineering team and contribute to building scalable, efficient, and secure backend services. This internship offers hands-on experience in API development, database management, and backend architecture, with guidance from experienced developers. You will work closely with cross-functional teams to deliver features that power our applications and improve user experience.
Responsibilities
- Assist in designing, developing, and maintaining backend services, APIs, and integrations.
- Collaborate with frontend engineers to support application functionality and data flow.
- Write clean, efficient, and well-documented code.
- Support database design, optimization, and query performance improvements.
- Participate in code reviews, debugging, and troubleshooting production issues.
- Assist with unit testing, integration testing, and ensuring system reliability.
- Work with cloud-based environments (e.g., AWS, Azure, GCP) to deploy and manage backend systems.
Requirements
- Currently pursuing or recently completed a degree in Computer Science, Software Engineering, or related field.
- Familiarity with one or more backend languages/frameworks (e.g., Node.js, Python/Django, Java/Spring Boot, Ruby on Rails).
- Understanding of RESTful APIs and/or GraphQL.
- Basic knowledge of relational and/or NoSQL databases (e.g., MySQL, PostgreSQL, MongoDB).
- Familiarity with version control (Git/GitHub).
- Strong problem-solving skills and attention to detail.
- Ability to work independently in a remote, collaborative environment.
Preferred Skills (Nice to Have)
- Experience with cloud services (AWS Lambda, S3, EC2, etc.).
- Familiarity with containerization (Docker) and CI/CD pipelines.
- Basic understanding of authentication and authorization (OAuth, JWT).
- Interest in backend performance optimization and scalability.
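The JWT concept from the preferred skills above can be sketched with the standard library (HS256 signing and verification). This is a learning sketch; production code should use a maintained library such as PyJWT, and the secret here is a placeholder.

```python
# Rough sketch of HS256 JWT signing/verification using only the stdlib.
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> bytes:
    return base64.urlsafe_b64encode(data).rstrip(b"=")

def sign(header: dict, payload: dict, secret: bytes) -> bytes:
    signing_input = (
        b64url(json.dumps(header).encode()) + b"." + b64url(json.dumps(payload).encode())
    )
    sig = hmac.new(secret, signing_input, hashlib.sha256).digest()
    return signing_input + b"." + b64url(sig)

def verify(token: bytes, secret: bytes) -> bool:
    signing_input, _, sig = token.rpartition(b".")
    expected = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return hmac.compare_digest(expected, sig)

secret = b"demo-secret"  # illustrative; load from config/secret store in practice
token = sign({"alg": "HS256", "typ": "JWT"}, {"sub": "user-42"}, secret)
print(verify(token, secret))         # True
print(verify(token + b"x", secret))  # False: tampered token fails
```

Note what verification does not cover here: expiry (`exp`), audience, and algorithm pinning, which real libraries handle and which matter for security.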
What You’ll Gain
- Hands-on experience building backend systems for real-world applications.
- Exposure to industry-standard tools, workflows, and coding practices.
- Mentorship from experienced backend engineers.
- Opportunity to contribute to live projects impacting end users.


About the Role
As a Senior Analyst at JustAnswer, you will be a Subject Matter Expert (SME) on Chatbot Strategy, driving long-term growth and incorporating the newest technologies like LLM.
In this role, you will deliver tangible business impact by providing high-quality insights and recommendations, combining strategic thinking and problem-solving with detailed analysis.
This position offers a unique opportunity to collaborate closely with Product Managers and cross-functional teams, uncover valuable business insights, devise optimization strategies, and validate them through experiments.
What You’ll Do
- Collaborate with Product and Analytics leadership to conceive and structure analysis, delivering highly actionable insights from “deep dives” into specific business areas.
- Analyze large volumes of internal & external data to identify growth and optimization opportunities.
- Package and communicate findings and recommendations to a broad audience, including senior leadership.
- Perform both Descriptive & Prescriptive Analytics, including experimentations (A/B, MAB), and build reporting to track trends.
- Perform advanced modeling (NLP, Text Mining) – preferable.
- Implement and track business metrics to help drive the business.
- Contribute to growth strategy from a marketing and operations perspective.
- Operate independently as a lead analyst to understand the JA audience and guide strategic decisions & executions.
What We’re Looking For
- 5+ years of experience in e-commerce/customer experience products.
- Proficiency in analysis and business modeling using Excel.
- Experience with Google Analytics, BigQuery, Google Ads, PowerBI, and Python / R is a plus.
- Strong SQL skills with the ability to write complex queries.
- Expertise in Descriptive and Inferential Statistical Analysis.
- Strong experience in setting up and analyzing A/B Testing or Hypothesis Testing.
- Ability to translate analytical results into actionable business recommendations.
- Excellent written and verbal communication skills; ability to communicate with all levels of management.
- Advanced English proficiency.
- App-related experience – preferable.
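Much of the A/B testing work described above reduces to a two-proportion z-test; here is a minimal stdlib sketch with invented conversion numbers (a real analysis would also plan power and minimum detectable effect up front).

```python
# Two-proportion z-test: compare conversion rates between control (a)
# and variant (b) using the pooled standard error.
import math

def two_prop_z(conv_a, n_a, conv_b, n_b):
    """Return (z statistic, two-sided p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value via the normal CDF expressed with erf.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Invented example: 5.0% vs 6.5% conversion on 4,000 users per arm.
z, p = two_prop_z(conv_a=200, n_a=4000, conv_b=260, n_b=4000)
print(round(z, 2), round(p, 4))  # significant at the usual 0.05 level
```

In practice the same test is often run in BigQuery/Python tooling or a platform's built-in stats engine; the sketch just makes the mechanics explicit.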
About Us
We are a San Francisco-based company founded in 2003 with a simple mission: we help people. We have democratized professional services by connecting customers with verified Experts who provide reliable answers anytime, on any budget.
- 12,000+ Experts across various domains (doctors, lawyers, tech support, mechanics, vets, home repair, and more).
- 10 million customers in 196 countries.
- 16 million+ questions answered in 20 years.
- Investors include Charles Schwab, Crosslink Capital, and Glynn Capital Management.
Why Join the Team
- 1,000+ employees and growing rapidly.
- Hiring criteria: Smart. Fun. Get things done.
- Profitable and fast-growing company.
- We love what we do and celebrate success together.
Our JustAnswer Promise
We strive together to make the world a better place, one answer at a time.
Our values (“The JA Way”):
- Data Driven: Data decides, not egos.
- Courageous: We take risks and challenge the status quo.
- Innovative: Constantly learning, creating, and adapting.
- Lean: Customer-focused, using lean testing to learn and improve.
- Humble: Past success is not a guarantee of future success.
Work Environment
- Remote-first/hybrid model in most locations.
- Optional in-person meetings for collaboration and social events.
- Employee well-being is a top priority.
- Where legally permissible, employees must be fully vaccinated against Covid-19 to attend in-person events.
Our Commitment to Diversity
We embrace workplace diversity, believing it drives richer insights, fuels innovation, and creates better outcomes.
We are committed to attracting and developing an inclusive workforce. Individuals seeking opportunities are considered without regard to race, color, religion, national origin, age, sex, marital status, ancestry, disability, veteran status, sexual orientation, gender identity, or any other protected status under applicable laws.


About the Role
As an Analyst at JustAnswer, you will be a Subject Matter Expert (SME) on Chatbot Strategy, driving long-term growth and incorporating the newest technologies like LLM.
In this role, you will deliver tangible business impact by providing high-quality insights and recommendations, combining strategic thinking and problem-solving with detailed analysis.
This position offers a unique opportunity to collaborate closely with Product Managers and cross-functional teams, uncover valuable business insights, devise optimization strategies, and validate them through experiments.
What You’ll Do
- Collaborate with Product and Analytics leadership to conceive and structure analysis, delivering highly actionable insights from “deep dives” into specific business areas.
- Analyze large volumes of internal & external data to identify growth and optimization opportunities.
- Package and communicate findings and recommendations to a broad audience, including senior leadership.
- Perform both Descriptive & Prescriptive Analytics, including experimentations (A/B, MAB), and build reporting to track trends.
- Perform advanced modeling (NLP, Text Mining) – preferable.
- Implement and track business metrics to help drive the business.
- Contribute to growth strategy from a marketing and operations perspective.
- Operate independently as a lead analyst to understand the JA audience and guide strategic decisions & executions.
What We’re Looking For
- 5+ years of experience in e-commerce/customer experience products.
- Proficiency in analysis and business modeling using Excel.
- Experience with Google Analytics, BigQuery, Google Ads, PowerBI, and Python / R is a plus.
- Strong SQL skills with the ability to write complex queries.
- Expertise in Descriptive and Inferential Statistical Analysis.
- Strong experience in setting up and analyzing A/B Testing or Hypothesis Testing.
- Ability to translate analytical results into actionable business recommendations.
- Excellent written and verbal communication skills; ability to communicate with all levels of management.
- Advanced English proficiency.
- App-related experience – preferable.
About Us
We are a San Francisco-based company founded in 2003 with a simple mission: we help people. We have democratized professional services by connecting customers with verified Experts who provide reliable answers anytime, on any budget.
- 12,000+ Experts across various domains (doctors, lawyers, tech support, mechanics, vets, home repair, and more).
- 10 million customers in 196 countries.
- 16 million+ questions answered in 20 years.
- Investors include Charles Schwab, Crosslink Capital, and Glynn Capital Management.
Why Join the Team
- 1,000+ employees and growing rapidly.
- Hiring criteria: Smart. Fun. Get things done.
- Profitable and fast-growing company.
- We love what we do and celebrate success together.
Our JustAnswer Promise
We strive together to make the world a better place, one answer at a time.
Our values (“The JA Way”):
- Data Driven: Data decides, not egos.
- Courageous: We take risks and challenge the status quo.
- Innovative: Constantly learning, creating, and adapting.
- Lean: Customer-focused, using lean testing to learn and improve.
- Humble: Past success is not a guarantee of future success.
Work Environment
- Remote-first/hybrid model in most locations.
- Optional in-person meetings for collaboration and social events.
- Employee well-being is a top priority.
- Where legally permissible, employees must be fully vaccinated against Covid-19 to attend in-person events.
Our Commitment to Diversity
We embrace workplace diversity, believing it drives richer insights, fuels innovation, and creates better outcomes.
We are committed to attracting and developing an inclusive workforce. Individuals seeking opportunities are considered without regard to race, color, religion, national origin, age, sex, marital status, ancestry, disability, veteran status, sexual orientation, gender identity, or any other protected status under applicable laws.
About HelloRamp.ai
HelloRamp is on a mission to revolutionize media creation for automotive and retail using AI. Our platform powers 3D/AR experiences for leading brands like Cars24, Spinny, and Samsung. We’re now building the next generation of Computer Vision + AI products, including cutting-edge NeRF pipelines and AI-driven video generation.
What You’ll Work On
- Develop and optimize Computer Vision pipelines for large-scale media creation.
- Implement NeRF-based systems for high-quality 3D reconstruction.
- Build and fine-tune AI video generation models using state-of-the-art techniques.
- Optimize AI inference for production (CUDA, TensorRT, ONNX).
- Collaborate with the engineering team to integrate AI features into scalable cloud systems.
- Research latest AI/CV advancements and bring them into production.
Skills & Experience
- Strong Python programming skills.
- Deep expertise in Computer Vision and Machine Learning.
- Hands-on with PyTorch/TensorFlow.
- Experience with NeRF frameworks (Instant-NGP, Nerfstudio, Plenoxels) and/or video synthesis models.
- Familiarity with 3D graphics concepts (meshes, point clouds, depth maps).
- GPU programming and optimization skills.
Nice to Have
- Knowledge of Three.js or WebGL for rendering AI outputs on the web.
- Familiarity with FFmpeg and video processing pipelines.
- Experience in cloud-based GPU environments (AWS/GCP).
Why Join Us?
- Work on cutting-edge AI and Computer Vision projects with global impact.
- Join a small, high-ownership team where your work matters.
- Opportunity to experiment, publish, and contribute to open-source.
- Competitive pay and flexible work setup.


Backend Engineer – AI Video Intelligence
Location: Remote
Type: Full-Time
Experience Required: 3–6 years
About the Role
We are hiring a Backend Engineer who is passionate about building highly performant, scalable backend systems that integrate deeply with AI and video technologies. This is an opportunity to work with a world-class engineering team, solving problems that power AI-driven products in production at scale.
If you enjoy crafting APIs, scaling backend architecture, and optimizing systems to millisecond-level response times — this is the role for you.
Responsibilities
● Build and scale backend systems using Node.js, Python, and TypeScript/JavaScript
● Architect microservices for real-time video analysis and LLM integrations
● Optimize database queries, caching layers, and backend workflows
● Work closely with frontend, data, and DevOps teams
● Deploy, monitor, and scale services on AWS
● Own the backend stack from design to implementation
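As a toy illustration of the caching-layer work above, here is a sketch using Python's `functools.lru_cache` to memoize a simulated expensive lookup (the names are illustrative; a production service would typically use a shared cache such as Redis rather than in-process memoization):

```python
from functools import lru_cache

calls = {"db": 0}  # counts how often the simulated database is actually hit

@lru_cache(maxsize=1024)
def get_user_profile(user_id: int) -> tuple:
    """Pretend this is an expensive database query; lru_cache memoizes it per user_id."""
    calls["db"] += 1
    return (user_id, f"user-{user_id}")

get_user_profile(42)  # cache miss: hits the "database"
get_user_profile(42)  # cache hit: served from memory
get_user_profile(7)   # cache miss
```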
Must-Have Skills
● 3+ years of experience with backend systems in production
● Strong in Node.js, Python, and TypeScript
● Experience designing REST APIs and microservices
● Proficient in PostgreSQL, MySQL, and Redis
● Hands-on with AWS services like EC2, S3, Lambda, etc.
● Real-world experience integrating with AI/LLM APIs
● Strong debugging and profiling skills
Bonus Skills
● Familiarity with Rust, Golang, or Next.js
● Prior experience in video tech, data platforms, or SaaS products
● Knowledge of message queues (Kafka, RabbitMQ, etc.)
● Exposure to containerization and orchestration (Docker/Kubernetes)

AccioJob is conducting a Walk-In Hiring Drive with MakunAI Global for the position of Python Engineer.
To apply, register and select your slot here: https://go.acciojob.com/cE8XQy
Required Skills: DSA, Python, Django, Fast API
Eligibility:
- Degree: All
- Branch: All
- Graduation Year: 2022, 2023, 2024, 2025
Work Details:
- Work Location: Noida (Hybrid)
- CTC: 3.2 LPA to 3.5 LPA
Evaluation Process:
Round 1: Offline Assessment at AccioJob Skill Centers located in Noida, Greater Noida, and Delhi
Further Rounds (for shortlisted candidates only):
- Profile & Background Screening Round
- Technical Interview Round 1
- Technical Interview Round 2
Important Note: Bring your laptop & earphones for the test.
Register here: https://go.acciojob.com/cE8XQy
Springer Capital is a cross-border asset management firm specializing in real estate investment banking between China and the USA. We are offering a remote internship for aspiring data engineers interested in data pipeline development, data integration, and business intelligence.
The internship offers flexible start and end dates. A short quiz or technical task may be required as part of the selection process.
Responsibilities:
- Design, build, and maintain scalable data pipelines for structured and unstructured data sources
- Develop ETL processes to collect, clean, and transform data from internal and external systems
- Support integration of data into dashboards, analytics tools, and reporting systems
- Collaborate with data analysts and software developers to improve data accessibility and performance
- Document workflows and maintain data infrastructure best practices
- Assist in identifying opportunities to automate repetitive data tasks
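The extract-transform-load flow described above can be sketched end to end with nothing but the Python standard library (the table name, sample data, and cleaning rules are illustrative):

```python
import csv
import io
import sqlite3

# Extract: CSV as it might arrive from an external system,
# with inconsistent casing and a blank amount to clean up.
raw = io.StringIO("id,city,amount\n1, Shanghai ,100\n2,CHICAGO,\n3,boston,250\n")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE deals (id INTEGER PRIMARY KEY, city TEXT, amount REAL)")

# Transform: normalize city names, default missing amounts to 0.
for row in csv.DictReader(raw):
    city = row["city"].strip().title()
    amount = float(row["amount"]) if row["amount"] else 0.0
    conn.execute("INSERT INTO deals VALUES (?, ?, ?)", (int(row["id"]), city, amount))
conn.commit()

# Load check: total cleaned amount ready for a dashboard or report.
total = conn.execute("SELECT SUM(amount) FROM deals").fetchone()[0]
```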


Research Intern
Position Overview
We are seeking a motivated Research Intern to join our AI research team, focusing on Text-to-Speech (TTS) and Automatic Speech Recognition (ASR) technologies. The intern will play a crucial role in evaluating our proprietary models against industry benchmarks, analyzing competitive voice agent platforms, and contributing to cutting-edge research in speech AI technologies.
Key Responsibilities
Model Evaluation & Benchmarking
- Conduct comprehensive evaluation of our TTS and ASR models against existing state-of-the-art models
- Design and implement evaluation metrics and frameworks for speech quality assessment
- Perform comparative analysis of model performance across different datasets and use cases
- Generate detailed reports on model strengths, weaknesses, and improvement opportunities
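One staple metric in ASR evaluation of this kind is word error rate (WER); a minimal reference implementation, assuming simple whitespace tokenization:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / reference length,
    computed as word-level Levenshtein distance via dynamic programming."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j]: edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution / match
    return dp[len(ref)][len(hyp)] / len(ref)

wer = word_error_rate("the cat sat on the mat", "the cat sat on a mat")  # one substitution
```

Real benchmarking would add text normalization (casing, punctuation) before scoring, since those choices can move WER substantially.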
Competitive Analysis
- Evaluate and compare our voice agent platform with existing solutions (Vapi, Bland AI, and other competitors)
- Analyze feature sets, performance metrics, and user experience across different voice agent platforms
- Conduct technical deep-dives into competitive architectures and methodologies
- Provide strategic recommendations based on competitive landscape analysis
Research & Innovation
- Monitor and analyze emerging trends in ASR, TTS, and voice AI technologies
- Research novel approaches to improve ASR and TTS model performance
- Investigate new architectures, training techniques, and optimization methods
- Stay current with academic literature and industry developments in speech AI
Model Development & Training
- Assist in training TTS and ASR models on various datasets
- Implement and experiment with different model architectures and configurations
- Perform model fine-tuning for specific use cases and domains
- Optimize models for different deployment scenarios (edge, cloud, real-time)
- Conduct data preprocessing and augmentation for training datasets
Documentation & Reporting
- Maintain detailed documentation of experiments, methodologies, and results
- Prepare technical reports and presentations for internal stakeholders
- Contribute to research publications and technical blog posts
- Create visualization and analysis tools for model performance tracking
Required Qualifications
Education & Experience
- Currently pursuing or recently completed Bachelor's/Master's degree in Computer Science, Electrical Engineering, Machine Learning, or related field
- Strong academic background in machine learning, deep learning, and signal processing
- Previous experience with speech processing, NLP, or audio ML projects (academic or professional)
Technical Skills
- Programming Languages: Proficiency in Python; experience with PyTorch, TensorFlow
- Speech AI Frameworks: Experience with libraries like librosa, torchaudio, speechbrain, or similar
- Machine Learning: Strong understanding of deep learning architectures, training procedures, and evaluation methods
- Data Processing: Experience with audio data preprocessing, feature extraction, and dataset management
- Tools & Platforms: Familiarity with Colab or Jupyter notebooks, Git, Docker, and cloud platforms (AWS/GCP/Azure)
Preferred Qualifications
- Experience with transformer architectures, attention mechanisms, and sequence-to-sequence models
- Knowledge of speech synthesis techniques (WaveNet, Tacotron, FastSpeech, etc.)
- Understanding of ASR architectures (Wav2Vec, Whisper, Conformer, etc.)
- Experience with model optimization techniques (quantization, pruning, distillation)
- Familiarity with MLOps tools and model deployment pipelines
- Previous work with voice AI applications or conversational AI systems
Skills & Competencies
Technical Competencies
- Strong analytical and problem-solving abilities
- Ability to design and conduct rigorous experiments
- Experience with statistical analysis and performance metrics
- Understanding of audio signal processing fundamentals
- Knowledge of distributed training and large-scale model development
Soft Skills
- Excellent written and verbal communication skills
- Ability to work independently and manage multiple projects
- Strong attention to detail and commitment to reproducible research
- Collaborative mindset and ability to work in cross-functional teams
- Curiosity and passion for staying current with AI research trends
What You'll Gain
Learning Opportunities
- Hands-on experience with state-of-the-art speech AI technologies
- Exposure to full model development lifecycle from research to deployment
- Mentorship from experienced AI researchers and engineers
- Opportunity to contribute to cutting-edge research projects
Professional Development
- Experience with industry-standard tools and methodologies
- Opportunity to present research findings to technical and business stakeholders
- Potential for research publication and conference presentations
- Networking opportunities within the AI research community
Springer Capital is a cross-border asset management firm focused on real estate investment banking in China and the USA. We are offering a remote internship for individuals passionate about automation, cloud infrastructure, and CI/CD pipelines. Start and end dates are flexible, and applicants may be asked to complete a short technical quiz or assignment as part of the application process.
Responsibilities:
▪ Assist in building and maintaining CI/CD pipelines to automate development workflows
▪ Monitor and improve system performance, reliability, and scalability
▪ Manage cloud-based infrastructure (e.g., AWS, Azure, or GCP)
▪ Support containerization and orchestration using Docker and Kubernetes
▪ Implement infrastructure as code using tools like Terraform or CloudFormation
▪ Collaborate with software engineering and data teams to streamline deployments
▪ Troubleshoot system and deployment issues across development and production environments

Title – Python Developer (Healthcare RCM Automation)
· Department - Technology
· Shift - Night
· Location - Noida
· Education - Graduation
· Experience – Minimum 3-7 years of professional experience with the skills below.
· Interpersonal Skills – Good communication skills, a positive attitude, and confidence.
The Python Developer is responsible for building, debugging, and implementing application projects in Python, and for developing program specifications and coded modules according to specifications and client standards.
Responsibilities
· Advanced Python Programming: Extensive experience in Python, with a deep understanding of Python principles, design patterns, and best practices. Proficiency in developing scalable and efficient Python code, with a focus on automation, data processing, and backend services. Demonstrated ability with automation libraries like PyAutoGUI for GUI automation tasks, enabling the automation of mouse and keyboard actions.
· Experience with Selenium for web automation: Capable of automating web browsers to mimic user actions, scrape web data, and test web applications.
· Python Frameworks and Libraries: Strong experience with popular Python frameworks and libraries relevant to data processing, web application development, and automation, such as Flask or Django for web development, Pandas and NumPy for data manipulation, and Celery for task scheduling.
· SQL Server Expertise: Advanced knowledge of SQL Server management and development.
· API Development and Integration: Experience in developing and consuming APIs. Understanding of API security best practices. Familiarity with integrating third-party services and APIs into the existing ecosystem.
· Version Control and CI/CD: Proficiency in using version control systems, such as Git. Experience with continuous integration and continuous deployment (CI/CD) pipelines
· Unit Testing and Debugging: Strong understanding of testing practices, including unit testing, integration testing. Experience with Python testing. Skilled in debugging and performance profiling.
· Containerization and Virtualization: Familiarity with containerization and orchestration tools, such as Docker and Kubernetes, to enhance application deployment and scalability.
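As a small, testable illustration of the API-integration practices above, here is a sketch of an exponential-backoff retry helper (the names and delays are illustrative; libraries such as tenacity offer production-grade equivalents):

```python
import time

def with_retries(func, attempts=3, base_delay=1.0, sleep=time.sleep):
    """Call func(); on failure, retry with exponential backoff
    (base_delay, 2*base_delay, 4*base_delay, ...). `sleep` is injectable
    so the policy is unit-testable without real waiting."""
    for attempt in range(attempts):
        try:
            return func()
        except Exception:
            if attempt == attempts - 1:
                raise
            sleep(base_delay * (2 ** attempt))

# Simulate a flaky third-party API that succeeds on the third call.
state = {"calls": 0}
def flaky_api():
    state["calls"] += 1
    if state["calls"] < 3:
        raise ConnectionError("transient failure")
    return {"status": "ok"}

delays = []
result = with_retries(flaky_api, attempts=4, base_delay=0.5, sleep=delays.append)
```

Injecting `sleep` is the same dependency-inversion trick that makes the unit-testing responsibilities above practical for time-dependent code.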
Requirements & Skills
· Analytical Thinking: Ability to analyze complex problems and break them down into manageable parts. Strong logical reasoning and troubleshooting skills.
· Communication: Excellent verbal and written communication skills. Ability to effectively articulate technical challenges and solutions to both technical and non-technical team members.
· Team Collaboration: Experience working in agile development environments. Ability to work collaboratively in cross-functional teams and with stakeholders from different backgrounds.
· Continuous Learning: A strong desire to learn new technologies and frameworks. Keeping up to date with industry trends and advancements in healthcare RCM, AI, and automation technologies.
Additional Preferred Skills:
· Industry-Specific Knowledge is a plus:
· Familiarity with healthcare industry standards and regulations, such as HIPAA, is highly advantageous.
· Understanding of healthcare revenue cycle management processes and challenges. Experience with healthcare data formats and standards (e.g., HL7, FHIR) is beneficial.
Educational Qualifications:
· Bachelor’s degree in a related field.
· Relevant technical certifications are a plus


Role Objective
Develop business relevant, high quality, scalable web applications. You will be part of a dynamic AdTech team solving big problems in the Media and Entertainment Sector.
Roles & Responsibilities
* Application Design: Understand requirements from the user, create stories and be a part of the design team. Check designs, give regular feedback and ensure that the designs are as per user expectations.
* Architecture: Create scalable and robust system architecture, in line with the client’s infrastructure. This could be on-prem or cloud (Azure, AWS, or GCP).
* Development: You will be responsible for the development of the front end and back end. Depending on the project, the application stack will comprise SQL, Django, Angular/React, HTML, and CSS. Knowledge of Golang and Big Data is a plus.
* Deployment: Suggest and implement a deployment strategy that is scalable and cost-effective. Create a detailed resource architecture and get it approved. CI/CD deployment on IIS or Linux. Knowledge of Docker is a plus.
* Maintenance: Maintaining development and production environments will be a key part of your job profile. This will also include troubleshooting, fixing bugs, and suggesting ways to improve the application.
* Data Migration: In the case of database migration, you will be expected to suggest appropriate strategies and implementation plans.
* Documentation: Create a detailed document covering important aspects like HLD, Technical Diagram, Script Design, SOP etc.
* Client Interaction: You will be interacting with the client on a day-to-day basis and hence having good communication skills is a must.
Requirements
Education-B. Tech (Comp. Sc, IT) or equivalent
Experience- 3+ years of experience developing applications on Django, Angular/React, HTML and CSS
Behavioural Skills-
1. Clear and Assertive communication
2. Ability to comprehend the business requirement
3. Teamwork and collaboration
4. Analytical thinking
5. Time Management
6. Strong troubleshooting and problem-solving skills
Technical Skills-
1. Back-end and Front-end Technologies: Django, Angular/React, HTML and CSS.
2. Cloud Technologies: AWS, GCP, and Azure
3. Big Data Technologies: Hadoop and Spark
4. Containerized Deployment: Docker and Kubernetes is a plus.
5. Other: Understanding of Golang is a plus.


About Us
We are a company where the ‘HOW’ of building software is just as important as the ‘WHAT.’ We partner with large organizations to modernize legacy codebases and collaborate with startups to launch MVPs, scale, or act as extensions of their teams. Guided by Software Craftsmanship values and eXtreme Programming Practices, we deliver high-quality, reliable software solutions tailored to our clients' needs.
We strive to:
- Bring our clients' dreams to life by being their trusted engineering partners, crafting innovative software solutions.
- Challenge offshore development stereotypes by delivering exceptional quality, and proving the value of craftsmanship.
- Empower clients to deliver value quickly and frequently to their end users.
- Ensure long-term success for our clients by building reliable, sustainable, and impactful solutions.
- Raise the bar of software craft by setting a new standard for the community.
Job Description
This is a remote position.
Our Core Values
- Quality with Pragmatism: We aim for excellence with a focus on practical solutions.
- Extreme Ownership: We own our work and its outcomes fully.
- Proactive Collaboration: Teamwork elevates us all.
- Pursuit of Mastery: Continuous growth drives us.
- Effective Feedback: Honest, constructive feedback fosters improvement.
- Client Success: Our clients’ success is our success.
Experience Level
This role is ideal for engineers with 4+ years of hands-on software development experience, particularly in Python and ReactJS at scale.
Role Overview
If you’re a Software Craftsperson who takes pride in clean, test-driven code and believes in Extreme Programming principles, we’d love to meet you. At Incubyte, we’re a DevOps organization where developers own the entire release cycle, meaning you’ll get hands-on experience across programming, cloud infrastructure, client communication, and everything in between. Ready to level up your craft and join a team that’s as quality-obsessed as you are? Read on!
What You'll Do
- Write Tests First: Start by writing tests to ensure code quality
- Clean Code: Produce self-explanatory, clean code with predictable results
- Frequent Releases: Make frequent, small releases
- Pair Programming: Work in pairs for better results
- Peer Reviews: Conduct peer code reviews for continuous improvement
- Product Team: Collaborate in a product team to build and rapidly roll out new features and fixes
- Full Stack Ownership: Handle everything from the front end to the back end, including infrastructure and DevOps pipelines
- Never Stop Learning: Commit to continuous learning and improvement
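"Write Tests First" can be shown in miniature: the test pins down the behaviour before any implementation exists (the slugify example is ours, purely illustrative, not a project requirement):

```python
# Step 1 (red): write the test first; it defines the behaviour we want.
def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("Clean, Test-Driven Code!") == "clean-test-driven-code"

# Step 2 (green): the simplest implementation that makes the test pass.
def slugify(title: str) -> str:
    words = "".join(c if c.isalnum() or c.isspace() else " " for c in title).split()
    return "-".join(w.lower() for w in words)

test_slugify()  # step 3 would be refactoring, with the test as a safety net
```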
AI-First Development Focus
- Leverage AI tools like GitHub Copilot, Cursor, Augment, Claude Code, etc., to accelerate development and automate repetitive tasks.
- Use AI to detect potential bugs, code smells, and performance bottlenecks early in the development process.
- Apply prompt engineering techniques to get the best results from AI coding assistants.
- Evaluate AI-generated code/tools for correctness, performance, and security before merging.
- Continuously explore: stay ahead by experimenting with and integrating new AI-powered tools and workflows as they emerge.
Requirements
What We're Looking For
- Proficiency in some or all of the following: ReactJS, JavaScript, Object Oriented Programming in JS
- 3+ years of Object-Oriented Programming with Python or equivalent
- 3+ years of experience working with relational (SQL) databases
- 3+ years of experience using Git to contribute code as part of a team of Software Craftspeople
AI Skills & Mindset
- Power user of AI-assisted coding tools (e.g., GitHub Copilot, Cursor, Augment, Claude Code).
- Strong prompt engineering skills to effectively guide AI in crafting relevant, high-quality code.
- Ability to critically evaluate AI-generated code for logic, maintainability, performance, and security.
- Curiosity and adaptability to quickly learn and apply new AI tools and workflows.
- An AI-evaluation mindset, balancing AI speed with human judgment for robust solutions.
Benefits
What We Offer
- Dedicated Learning & Development Budget: Fuel your growth with a budget dedicated solely to learning.
- Conference Talks Sponsorship: Amplify your voice! If you’re speaking at a conference, we’ll fully sponsor and support your talk.
- Cutting-Edge Projects: Work on exciting projects with the latest AI technologies
- Employee-Friendly Leave Policy: Recharge with ample leave options designed for a healthy work-life balance.
- Comprehensive Medical & Term Insurance: Full coverage for you and your family’s peace of mind.
- And More: Extra perks to support your well-being and professional growth.
Work Environment
- Remote-First Culture: At Incubyte, we thrive on a culture of structured flexibility — while you have control over where and how you work, everyone commits to a consistent rhythm that supports their team during core working hours for smooth collaboration and timely project delivery. By striking the perfect balance between freedom and responsibility, we enable ourselves to deliver the high-quality standards our customers recognize us by. With asynchronous tools and a push for active participation, we foster a vibrant, hands-on environment where each team member’s engagement and contributions drive impactful results.
- Work-In-Person: Twice a year, we come together for two-week sprints to collaborate in person, foster stronger team bonds, and align on goals. Additionally, we host an annual retreat to recharge and connect as a team. All travel expenses are covered.
- Proactive Collaboration: Collaboration is central to our work. Through daily pair programming sessions, we focus on mentorship, continuous learning, and shared problem-solving. This hands-on approach keeps us innovative and aligned as a team.
Incubyte is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.

Location: Hybrid/ Remote
Type: Contract / Full‑Time
Experience: 5+ Years
Qualification: Bachelor’s or Master’s in Computer Science or a related technical field
Responsibilities:
- Architect & implement the RAG pipeline: embeddings ingestion, vector search (MongoDB Atlas or similar), and context-aware chat generation.
- Design and build Python‑based services (FastAPI) for generating and updating embeddings.
- Host and apply LoRA/QLoRA adapters for per‑user fine‑tuning.
- Automate data pipelines to ingest daily user logs, chunk text, and upsert embeddings into the vector store.
- Develop Node.js/Express APIs that orchestrate embedding, retrieval, and LLM inference for real‑time chat.
- Manage vector index lifecycle and similarity metrics (cosine/dot‑product).
- Deploy and optimize on AWS (Lambda, EC2, SageMaker), containerization (Docker), and monitoring for latency, costs, and error rates.
- Collaborate with frontend engineers to define API contracts and demo endpoints.
- Document architecture diagrams, API specifications, and runbooks for future team onboarding.
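The retrieval half of the RAG pipeline described above can be sketched in plain Python. The bigram-hashing "embedding" below is a deterministic stand-in for illustration only; a real pipeline would call an embedding model (e.g. OpenAI embeddings) and a vector store such as MongoDB Atlas Vector Search:

```python
import math

def embed(text, dim=512):
    """Deterministic stand-in embedding: hash character bigrams into buckets.
    A real pipeline would call an embedding model instead."""
    vec = [0.0] * dim
    for i in range(len(text) - 1):
        bucket = (ord(text[i]) * 31 + ord(text[i + 1])) % dim
        vec[bucket] += 1.0
    return vec

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# "Upsert" chunks into an in-memory store, then retrieve the best match for a query.
store = {}  # chunk_id -> (text, vector)
for cid, text in [("a", "reset your password"), ("b", "billing and invoices")]:
    store[cid] = (text, embed(text))

query = "how do I reset my password"
best = max(store, key=lambda cid: cosine(store[cid][1], embed(query)))
```

The retrieved chunk's text would then be stuffed into the LLM prompt for context-aware chat generation.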
Required Skills
- Strong Python expertise (FastAPI, async programming).
- Proficiency with Node.js and Express for API development.
- Experience with vector databases (MongoDB Atlas Vector Search, Pinecone, Weaviate) and similarity search.
- Familiarity with OpenAI’s APIs (embeddings, chat completions).
- Hands‑on with parameter‑efficient fine‑tuning (LoRA, QLoRA, PEFT/Hugging Face).
- Knowledge of LLM hosting best practices on AWS (EC2, Lambda, SageMaker).
- Containerization skills (Docker).
- Good understanding of RAG architectures, prompt design, and memory management.
- Strong Git workflow and collaborative development practices (GitHub, CI/CD).
Nice‑to‑Have:
- Experience with Llama family models or other open‑source LLMs.
- Familiarity with MongoDB Atlas free tier and cluster management.
- Background in data engineering for streaming or batch processing.
- Knowledge of monitoring & observability tools (Prometheus, Grafana, CloudWatch).
- Frontend skills in React to prototype demo UIs.

Ontrac Solutions is a leading technology consulting firm specializing in cutting-edge solutions that drive business transformation. We partner with organizations to modernize their infrastructure, streamline processes, and deliver tangible results.
Our client is actively seeking a Conversational AI Engineer with deep hands-on experience in Google Contact Center AI (CCAI) to join a high-impact digital transformation project via a GCP Premier Partner. As part of a staff augmentation model, you will be embedded within the client’s technology or contact center innovation team, delivering scalable virtual agent solutions that improve customer experience, agent productivity, and call deflection.
Key Responsibilities:
- Lead the design and buildout of Dialogflow CX/ES agents across chat and voice channels.
- Integrate virtual agents with client systems and platforms (e.g., Genesys, Twilio, NICE CXone, Salesforce).
- Develop fulfillment logic using Google Cloud Functions, Cloud Run, and backend integrations (via REST APIs and webhooks).
- Collaborate with stakeholders to define intent models, entity schemas, and user flows.
- Implement Agent Assist and CCAI Insights to augment live agent productivity.
- Leverage Google Cloud tools including Pub/Sub, Logging, and BigQuery to support and monitor the solution.
- Support tuning, regression testing, and enhancement of NLP performance using live utterance data.
- Ensure adherence to enterprise security and compliance requirements.
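A minimal sketch of the webhook fulfillment side of this work: building a Dialogflow CX response payload in Python (the field names follow our reading of the camelCase WebhookResponse JSON shape, fulfillmentResponse and sessionInfo; verify against the current Dialogflow CX reference before relying on them):

```python
import json

def make_cx_response(reply_text, params=None):
    """Build a minimal Dialogflow CX webhook fulfillment response body."""
    body = {
        "fulfillmentResponse": {
            "messages": [{"text": {"text": [reply_text]}}]
        }
    }
    if params:
        # Session parameters carry state (e.g. an order id) across turns.
        body["sessionInfo"] = {"parameters": params}
    return json.dumps(body)

payload = json.loads(make_cx_response("Your order has shipped.", {"order_id": "A123"}))
```

In a deployed agent, this function body would run inside a Cloud Function or Cloud Run service registered as the flow's webhook.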
Required Skills & Qualifications:
- 3+ years developing conversational AI experiences, including at least 1–2 years with Google Dialogflow CX or ES.
- Solid experience across GCP services (Functions, Cloud Run, IAM, BigQuery, etc.).
- Strong skills in Python or Node.js for webhook fulfillment and backend logic.
- Knowledge of NLU/NLP modeling, training, and performance tuning.
- Prior experience working in client-facing or embedded roles via consulting or staff augmentation.
- Ability to communicate effectively with technical and business stakeholders.
Nice to Have:
- Hands-on experience with Agent Assist, CCAI Insights, or third-party CCaaS tools (Genesys, Twilio Flex, NICE CXone).
- Familiarity with Vertex AI, AutoML, or other GCP ML services.
- Experience in regulated industries (healthcare, finance, insurance, etc.).
- Google Cloud certification in Cloud Developer or CCAI Engineer.


Ops Analysts/Sys Admin
Company Summary :
As the recognized global standard for project-based businesses, Deltek delivers software and information solutions to help organizations achieve their purpose. Our market leadership stems from the work of our diverse employees who are united by a passion for learning, growing and making a difference. At Deltek, we take immense pride in creating a balanced, values-driven environment, where every employee feels included and empowered to do their best work. Our employees put our core values into action daily, creating a one-of-a-kind culture that has been recognized globally. Thanks to our incredible team, Deltek has been named one of America's Best Midsize Employers by Forbes, a Best Place to Work by Glassdoor, a Top Workplace by The Washington Post and a Best Place to Work in Asia by World HRD Congress. www.deltek.com
External Job Title :
Systems Engineer 1
Position Responsibilities :
We are seeking a highly skilled and motivated Systems Engineer to join our team. Beyond a strong technical background, excellent problem-solving abilities, and a collaborative mindset, the ideal candidate will be a self-starter with a high level of initiative and a passion for experimentation. This role requires someone who thrives in a fast-paced environment and is eager to take on new challenges.
- Technical Skills:
Must Have Skills:
- PHP
- SQL; Relational Database Concepts
- At least one scripting language (Python, PowerShell, Bash, Unix shell, etc.)
- Experience with Learning and Utilizing APIs
Nice to Have Skills:
- Experience with AI Initiatives & exposure of GenAI and/or Agentic AI projects
- Microsoft Power Apps
- Microsoft Power BI
- Atomic
- Snowflake
- Cloud-Based Application Development
- Gainsight
- Salesforce
- Soft Skills:
Must Have Skills:
- Flexible Mindset for Solution Development
- Independent and Self-Driven; Autonomous
- Investigative; drives toward resolving Root Cause of Stakeholder needs instead of treating Symptoms; Critical Thinker
- Collaborative mindset to drive best results
Nice to Have Skills:
- Business Acumen (Very Nice to Have)
- Responsibilities:
- Develop and maintain system solutions to meet stakeholder needs.
- Collaborate with team members and stakeholders to ensure effective communication and teamwork.
- Independently drive projects and tasks to completion with minimal supervision.
- Investigate and resolve root causes of issues, applying critical thinking to develop effective solutions.
- Adapt to changing requirements and maintain a flexible approach to solution development.
Qualifications :
- A college degree in Computer Science, Software Engineering, Information Science or a related field is required
- Minimum 2-3 years of programming experience with PHP, Python, Power BI or Snowflake, and API integration.
- Proven experience in system engineering or a related field.
- Strong technical skills in the required areas.
- Excellent problem-solving and critical thinking abilities.
- Ability to work independently and as part of a team.
- Strong communication and collaboration skills.

Greentree Capital is seeking an enthusiastic Software Network Engineering Intern to join our growing team. This internship will focus on integrating artificial intelligence and ChatGPT functionalities into our AI-powered chatbot systems across various investment project websites. The ideal candidate will have a passion for artificial intelligence and software development, with a strong foundation in programming and problem-solving.
Responsibilities:
- Collaborate with the development team to design and implement AI-driven chatbot functionalities for our investment project websites.
- Assist in the integration of ChatGPT into existing systems to enhance user interaction and experience using artificial intelligence technologies.
- Conduct research on artificial intelligence technologies, natural language processing (NLP), and recommend best practices for chatbot development.
- Test and debug AI-driven chatbot functionalities to ensure optimal performance and user satisfaction.
- Gather and analyze user feedback to improve AI-driven chatbot responses and capabilities.
- Document technical specifications and processes for the development of AI-enhanced chatbot features.
Qualifications
- Strong foundation in programming languages such as Python, Java, or JavaScript.
- Familiarity with AI concepts, machine learning, and natural language processing (NLP) technologies.
- Experience with chatbot development and/or artificial intelligence frameworks is a plus.
- Ability to work in a fast-paced environment, be self-motivated, organized, and detail-oriented.
- Excellent communication skills, with the ability to work collaboratively within a team.
- A keen interest in learning about new artificial intelligence technologies and business processes.
Education & Experience
- Currently pursuing a Bachelor’s degree in Computer Science, Software Engineering, or a related field.
- Previous experience in software development or AI-related projects is preferred but not required.
Greentree’s website: www.greentree.group

Hookux is a forward-thinking company seeking a skilled Full Stack Developer to join our team. You will work on a variety of exciting projects that require problem-solving, innovation, and scalability. One such project is a stock market and crypto investing simulation platform that teaches children financial skills through gamified competition.
Key Responsibilities:
- Develop and maintain robust, scalable, and efficient front-end and back-end systems.
- Collaborate with cross-functional teams to define, design, and ship new features.
- Design and implement API endpoints and server-side logic.
- Work closely with the design and product teams to ensure the technical feasibility of UI/UX designs.
- Optimize the application for maximum speed and scalability.
- Write well-documented, clean code.
- Troubleshoot and debug applications.
- Stay up-to-date with emerging technologies and industry trends.
Technical Skills & Experience:
- Proficient in JavaScript/TypeScript, with expertise in React.js for front-end development.
- Strong experience with Node.js, Express.js, or other backend technologies.
- Familiarity with database technologies such as MongoDB, PostgreSQL, or MySQL.
- Experience with RESTful APIs and third-party integrations.
- Knowledge of cloud platforms like AWS, Azure, or Google Cloud.
- Proficient in version control (e.g., Git) and collaboration tools.
- Experience with agile methodologies and continuous integration/deployment (CI/CD).
Bonus Skills:
- Experience with React Native for mobile app development.
- Familiarity with blockchain technology or cryptocurrency-related platforms.
- Experience with containerization (e.g., Docker, Kubernetes).
- Knowledge of testing frameworks and tools.
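The "design and implement API endpoints" responsibility above boils down to mapping (method, path) pairs onto handlers. The role's stack is Node/Express, but the same routing idea can be sketched compactly in Python (the route, the in-memory "database", and all names are invented for illustration):

```python
# Framework-free sketch of RESTful endpoint dispatch, analogous to an
# Express route like GET /portfolios/:user. Data and names are hypothetical.

PORTFOLIOS = {"alice": {"cash": 1000.0, "holdings": {"AAPL": 2}}}

def handle_request(method, path):
    """Dispatch a (method, path) pair to a handler, returning (status, body)."""
    parts = path.strip("/").split("/")
    if method == "GET" and parts[0] == "portfolios" and len(parts) == 2:
        user = parts[1]
        if user in PORTFOLIOS:
            return 200, PORTFOLIOS[user]
        return 404, {"error": "unknown user"}
    return 405, {"error": "unsupported route"}

status, body = handle_request("GET", "/portfolios/alice")
```

In Express the same shape would be `app.get("/portfolios/:user", handler)`; the point is that each endpoint is a small, testable function from request to status and body.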
Qualifications:
- Bachelor's degree in Computer Science, Engineering, or related field (or equivalent practical experience).
- years of experience in full stack development.
- Ability to manage multiple priorities and work independently as well as in a team environment.
What We Offer:
- Competitive salary or hourly rate, with performance bonuses
- Flexible working hours and remote working options
- Opportunities for career growth and learning
- Opportunity to work on impactful, real-world projects
- Creative and supportive team environment
Email your CV and portfolio to hr@hookux.com.



Remote Job Opportunity
Job Title: Data Scientist
Contract Duration: 6 months+
Location: Offshore (India)
Work Time: 3 pm to 12 am
Must have 4+ Years of relevant experience.
Job Summary:
We are seeking an AI Data Scientist with a strong foundation in machine learning, deep learning, and statistical modeling to design, develop, and deploy cutting-edge AI solutions.
The ideal candidate will have expertise in building and optimizing AI models, with a deep understanding of both statistical theory and modern AI techniques. You will work on high-impact projects, from prototyping to production, collaborating with engineers, researchers, and business stakeholders to solve complex problems using AI.
Key Responsibilities:
Research, design, and implement machine learning and deep learning models for predictive and generative AI applications.
Apply advanced statistical methods to improve model robustness and interpretability.
Optimize model performance through hyperparameter tuning, feature engineering, and ensemble techniques.
Perform large-scale data analysis to identify patterns, biases, and opportunities for AI-driven automation.
Work closely with ML engineers to train, validate, and deploy models.
Stay updated with the latest research and developments in AI and machine learning to ensure innovative and cutting-edge solutions.
Qualifications & Skills:
Education: PhD or Master's degree in Statistics, Mathematics, Computer Science, or a related field.
Experience:
4+ years of experience in machine learning and deep learning, with expertise in algorithm development and optimization.
Proficiency in SQL, Python, and visualization tools (Power BI).
Experience developing mathematical models for business applications, preferably in finance, trading, image-based AI, biomedical modeling, or recommender systems.
Strong communication skills to interact effectively with both technical and non-technical stakeholders.
Excellent problem-solving skills with the ability to work independently and as part of a team.
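Hyperparameter tuning, named in the responsibilities above, is in its simplest form an exhaustive grid search over candidate settings. A framework-free sketch (the toy "validation loss" and the parameter grid are invented for illustration; in practice the loss would come from cross-validating a real model):

```python
from itertools import product

# Toy validation loss standing in for a real model-evaluation call;
# it is minimized at learning_rate=0.1, regularization=0.01.
def validation_loss(learning_rate, regularization):
    return (learning_rate - 0.1) ** 2 + (regularization - 0.01) ** 2

grid = {
    "learning_rate": [0.01, 0.1, 0.5],
    "regularization": [0.001, 0.01, 0.1],
}

def grid_search(loss_fn, grid):
    """Return the parameter combination minimizing loss_fn, plus its loss."""
    names = sorted(grid)
    best_params, best_loss = None, float("inf")
    for values in product(*(grid[n] for n in names)):
        params = dict(zip(names, values))
        loss = loss_fn(**params)
        if loss < best_loss:
            best_params, best_loss = params, loss
    return best_params, best_loss

best, loss = grid_search(validation_loss, grid)
```

Libraries such as scikit-learn package the same idea (with cross-validation and parallelism) as `GridSearchCV`; randomized and Bayesian search scale better when the grid grows.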



Role Objective
Develop business-relevant, high-quality, scalable web applications. You will be part of a dynamic AdTech team solving big problems in the Media and Entertainment sector.
Roles & Responsibilities
* Application Design: Gather requirements from users, create user stories, and work with the design team. Review designs, give regular feedback, and ensure they meet user expectations.
* Architecture: Create scalable and robust system architecture that aligns with the client's infrastructure, which may be on-premises or in the cloud (Azure, AWS, or GCP).
* Development: You will be responsible for developing both the front end and the back end. Depending on the project, the application stack will comprise SQL, Django, Angular/React, HTML, and CSS. Knowledge of Go and Big Data is a plus.
* Deployment: Suggest and implement a deployment strategy that is scalable and cost-effective. Create a detailed resource architecture and get it approved. Set up CI/CD deployment on IIS or Linux. Knowledge of Docker is a plus.
* Maintenance: Maintaining development and production environments will be a key part of your job. This includes troubleshooting, fixing bugs, and suggesting improvements to the application.
* Data Migration: For database migrations, you will be expected to suggest appropriate strategies and implementation plans.
* Documentation: Create detailed documents covering important aspects such as high-level design (HLD), technical diagrams, script design, and SOPs.
* Client Interaction: You will interact with the client on a day-to-day basis, so strong communication skills are a must.
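The data-migration responsibility above usually comes down to moving rows in bounded batches so the source database stays responsive during the copy. A minimal, framework-free sketch using SQLite (the table, columns, and batch size are invented for illustration; a real migration would also handle schema differences and retries):

```python
import sqlite3

def migrate_in_batches(src, dst, batch_size=2):
    """Copy rows from the src 'users' table to dst in fixed-size batches.

    Returns the total number of rows migrated.
    """
    offset = 0
    while True:
        rows = src.execute(
            "SELECT id, name FROM users ORDER BY id LIMIT ? OFFSET ?",
            (batch_size, offset),
        ).fetchall()
        if not rows:
            break
        dst.executemany("INSERT INTO users (id, name) VALUES (?, ?)", rows)
        dst.commit()  # commit per batch so progress survives a failure
        offset += len(rows)
    return offset

# In-memory databases stand in for the real source and target.
src = sqlite3.connect(":memory:")
dst = sqlite3.connect(":memory:")
src.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
dst.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
src.executemany("INSERT INTO users VALUES (?, ?)",
                [(1, "a"), (2, "b"), (3, "c")])
migrated = migrate_in_batches(src, dst)
```

Keyset pagination (`WHERE id > last_seen_id`) scales better than `OFFSET` on large tables, but the batch-and-commit structure is the same.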
Requirements:
Education: B.Tech (Computer Science, IT) or equivalent
Experience: 1+ years of experience developing applications with Django, Angular/React, HTML, and CSS
Behavioural Skills:
1. Clear and assertive communication
2. Ability to comprehend business requirements
3. Teamwork and collaboration
4. Analytical thinking
5. Time management
6. Strong troubleshooting and problem-solving skills
Technical Skills:
1. Back-end and front-end technologies: Django, Angular/React, HTML, and CSS
2. Cloud technologies: AWS, GCP, and Azure
3. Big Data technologies: Hadoop and Spark are a plus
4. Containerized deployment: Docker and Kubernetes are a plus
5. Other: understanding of Golang is a plus