50+ Remote Python Jobs in India



Software Development Intern
About This Role
We're building next-generation browser agents that combine accuracy, security, and advanced task learning capabilities. We're looking for self-driven, independent interns who thrive on exploration and problem-solving to help us push the boundaries of what's possible with intelligent web automation.
This isn't a traditional learning internship—we want builders who have already proven they can ship projects and tackle challenges autonomously. You'll work across our full tech stack, from backend APIs to frontend interfaces, with access to cutting-edge AI-powered development tools while contributing to the future of browser automation.
What You'll Do
- Develop intelligent browser agents with advanced task learning and execution capabilities
- Build secure automation systems that can navigate complex web environments accurately
- Create robust AI-powered workflows using LangChain and modern ML frameworks
- Design and implement security measures for safe browser automation
- Create comprehensive test environments for agent validation and performance testing
- Debug and fix application bugs across the full stack to ensure reliable agent operation
- Solve complex problems independently using AI code assistants (Cursor, v0.dev, etc.)
- Explore and experiment with new technologies in AI agent development
- Own projects end-to-end from conception to deployment
- Work across the full stack as needed—no rigid role boundaries
Our Tech Stack
Backend:
- Python with FastAPI
- LangChain for AI/ML workflows
- Google Cloud Platform (GCP) services
- Supabase for database and authentication
Frontend:
- JavaScript/TypeScript
- React for web interfaces
- Electron for desktop applications
Development Tools:
- Cursor IDE with AI assistance
- v0.dev for rapid prototyping
- Modern DevOps and CI/CD pipelines
Flexibility:
- Choose your own tech stack when needed - We're open to new tools and frameworks that solve problems better
- Experiment with cutting-edge technologies - If you find a better solution, we're all ears
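To make the stack concrete, here is a minimal, hypothetical sketch (not taken from the team's codebase) of how FastAPI and a browser-automation library such as Playwright might fit together; the endpoint name, parameter, and task are purely illustrative.

```python
# Minimal sketch (hypothetical endpoint and task) of a FastAPI service that
# drives a headless browser with Playwright, the kind of building block a
# browser-agent stack like this might start from.
from fastapi import FastAPI
from playwright.async_api import async_playwright

app = FastAPI()

@app.get("/page-title")
async def page_title(url: str):
    """Open the given URL in a headless browser and return its <title>."""
    async with async_playwright() as p:
        browser = await p.chromium.launch(headless=True)
        page = await browser.new_page()
        await page.goto(url)
        title = await page.title()
        await browser.close()
    return {"url": url, "title": title}
```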
What We're Looking For
Required Experience
- Proven project portfolio - Show us what you've built, not what you've learned
- Full-stack development experience with Python and JavaScript
- Independent problem-solving skills - You research, experiment, and find solutions
- Experience with modern frameworks (FastAPI, React, or similar)
- Cloud platform familiarity (GCP, AWS, or Azure)
Ideal Candidates Have
- Built and deployed real applications (personal projects, hackathons, open source)
- Experience with browser automation (Selenium, Playwright, Puppeteer, or similar)
- AI/ML model integration experience (LangChain, OpenAI APIs, agent frameworks)
- Security-focused development and an understanding of web security principles
- Task learning and reinforcement learning familiarity
- Testing and debugging experience with automated systems and complex applications
- Test environment setup and CI/CD pipeline experience
- Database design and optimization experience
- Desktop application development (Electron or similar)
- DevOps and infrastructure automation knowledge
What We Offer
- Work on cutting-edge browser agent technology - Shape the future of intelligent web automation
- Cutting-edge AI development tools - Full access to Cursor, v0.dev, and other AI assistants
- Technology freedom - Choose the best tools for the job, not just what's already in the stack
- Real project ownership - Your work will directly impact our next-gen browser agents
- Flexible exploration time - Dedicate time to experiment with new AI/ML approaches
- Mentorship from experienced developers - When you need it, not constant hand-holding
- Remote-first environment with flexible working hours
- Competitive internship compensation
What Makes You Stand Out
- Self-starter mentality - You don't wait for detailed instructions
- Curiosity-driven exploration - You love diving into new technologies
- Problem-solving resilience - You debug, research, and iterate until it works
- Quality-focused delivery - You ship polished, well-tested code
- Open source contributions or active GitHub presence
- Technology adaptability - You can evaluate and adopt new tools when they solve problems better
Application Requirements
- Portfolio/GitHub - Show us your best projects with live demos
- Brief cover letter - Tell us about a challenging problem you solved independently
- Technical challenge - We'll provide a small project to assess your problem-solving approach
Not a Good Fit If You
- Need constant guidance and structured learning paths
- Prefer working on assigned tasks without creative input
- Haven't built substantial projects outside of coursework
- Are looking primarily for resume building rather than real contribution
Ready to build something amazing? Send us your portfolio and let's see what you can create with unlimited access to AI development tools and real-world challenges.
We're an equal opportunity employer committed to diversity and inclusion.

Data Science Analyst – Remote
Springer Capital is a real estate investment firm based in Chicago, Shanghai, and Hong Kong. Springer engages in Capital Advisory for APAC Private Equity and Asset Management, making financial investments in real estate and other sectors in US markets.
Springer seeks a Data Science Analyst to join the Technology side of the company. The internship can be onsite in Shanghai or conducted remotely. The start date of the internship is flexible.
Job Highlights
As an intern on the data analysis team, you will focus on researching and developing tools and workflows that automate parts of our business processes. Because business automation matters throughout the firm, you will have the opportunity to collaborate with teams across Springer.
What you will do as an intern:
Collect data from various sources, including databases, APIs, and web scraping tools.
Clean and process raw data to ensure it is accurate and consistent.
Analyze data to extract insights using computational tools, such as Excel, SQL, and Python.
Communicate insights clearly and concisely to your manager as your work progresses.
Implement solutions based on insights you discovered to improve business processes or solve problems.
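As a rough illustration of the collect-clean-analyze workflow above, here is a minimal pandas sketch; the file name and column names are hypothetical.

```python
# Minimal sketch (hypothetical file and columns) of the kind of
# collect-clean-analyze loop described above, using pandas.
import pandas as pd

# Collect: load raw data exported from a database or API.
raw = pd.read_csv("transactions.csv")          # hypothetical source file

# Clean: drop duplicates, fix types, handle missing values.
clean = (
    raw.drop_duplicates()
       .assign(amount=lambda df: pd.to_numeric(df["amount"], errors="coerce"))
       .dropna(subset=["amount", "deal_id"])
)

# Analyze: a simple aggregate insight to share with the manager.
summary = clean.groupby("deal_id")["amount"].agg(["count", "sum", "mean"])
print(summary.sort_values("sum", ascending=False).head(10))
```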
Our commitment to your development:
Thorough and detailed training materials provided before interns hit the desk
Interns will have group calls with the director and supervisors regularly for up-to-date and constructive feedback
Greater leadership and responsibilities will be given to interns based on work quality
Who we are looking for:
Strong experience using Excel.
Passionate about analyzing data.
Prior experience with data analysis (preferred).
Research and problem-solving abilities
About Springer:
Springer Capital focuses on raising capital and solving capital issues in the real estate private equity market. We have experience raising capital for clients across the entire capital spectrum. Client relationships are a top priority for Springer. We establish long-term relationships with investors and lenders as well as have active dialogues with private equities, family offices, pension funds, infrastructure funds, and independent sponsors across Asia. With Springer’s expertise in the housing market, we ensure all parties are aligned in land acquisitions, development, improvements, sales, and lease-up.
Technical and legal Information
The internship enrollment period is flexible. The expected hours are 20 per week for ~3 months. Upon completion, you will receive an internship confirmation letter, or you can apply to your school for internship credit.


Build a dynamic solution empowering companies to optimize promotional activities for maximum impact. It collects and validates data, analyzes promotion effectiveness, plans calendars, and integrates seamlessly with existing systems. The tool enhances vendor collaboration, negotiates better deals, and employs machine learning to optimize promotional plans, enabling companies to make informed decisions and maximize return on investment.
Technology Stack: Scala, Go, Docker, Kubernetes, Databricks; Python is optional.
Working Time Zone: EU
Specialty: Data Science
Level of the Candidate: more than 5 years of experience
Language Proficiency: English Upper-Intermediate
Required Soft Skills:
- Problem-solving approach valued over raw experience
- Ability to clarify requirements with the customer
- Willingness to pair with other engineers when solving complex issues
- Good communication skills
Hard Skills / Need to Have:
- Experience in Scala and/or Go, designing and building scalable high-performing applications
- Experience in containerization and microservices orchestration using Docker and Kubernetes
- Experience in building data pipelines and ETL solutions using Databricks
- Experience in data storage and retrieval with PostgreSQL and Elasticsearch
- Experience in deploying and maintaining solutions in the Azure cloud environment
- Experience in Python is nice to have
Responsibilities and Tasks:
- Develop and maintain distributed systems using Scala and/or Go
- Work with Docker and Kubernetes for containerization and microservices orchestration
- Build data pipelines and ETL solutions using Databricks
- Work with PostgreSQL and Elasticsearch for data storage and retrieval
- Deploy and maintain solutions in the Azure cloud environment
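Python is only optional for this role, but as a hedged illustration of the Databricks-style pipeline work listed above, here is a minimal PySpark sketch; the paths, columns, and promo-uplift metric are hypothetical.

```python
# Minimal PySpark ETL sketch (hypothetical paths and columns) of the kind of
# pipeline work described above; on Databricks the SparkSession is already
# provided as `spark`.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("promo-effectiveness").getOrCreate()

# Extract: read raw promotion events.
events = spark.read.json("/mnt/raw/promo_events/")        # hypothetical path

# Transform: keep completed promotions and compute uplift per promo.
uplift = (
    events.filter(F.col("status") == "completed")
          .groupBy("promo_id")
          .agg(F.sum("incremental_sales").alias("total_uplift"))
)

# Load: write results for downstream analysis.
uplift.write.mode("overwrite").parquet("/mnt/curated/promo_uplift/")
```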

Snowflake Data Engineer
Job Description:
· Overall experience: 5+ years with Snowflake and Python.
· 5+ years of experience in data preparation and BI projects: understanding business requirements in a BI context and working with data models to transform raw data into meaningful data using Snowflake and Python.
· Designing and creating data models that define the structure and relationships of various data elements within the organization. This includes conceptual, logical, and physical data models, which help ensure data accuracy, consistency, and integrity.
· Designing data integration solutions that allow different systems and applications to share and exchange data seamlessly. This may involve selecting appropriate integration technologies, developing ETL (Extract, Transform, Load) processes, and ensuring data quality during the integration process.
· Create and maintain optimal data pipeline architecture.
· Good knowledge of cloud platforms like AWS/Azure/GCP
· Good hands-on knowledge of Snowflake is a must, including experience with various data ingestion methods (Snowpipe and others), Time Travel, data sharing, and other Snowflake capabilities
· Good knowledge of Python/PySpark, including advanced features of Python
· Support business development efforts (proposals and client presentations).
· Ability to thrive in a fast-paced, dynamic, client-facing role where delivering solid work products to exceed high expectations is a measure of success.
· Excellent leadership and interpersonal skills.
· Eager to contribute to a team-oriented environment.
· Strong prioritization and multi-tasking skills with a track record of meeting deadlines.
· Ability to be creative and analytical in a problem-solving environment.
· Effective verbal and written communication skills.
· Adaptable to new environments, people, technologies, and processes
· Ability to manage ambiguity and solve undefined problems.
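As a small, hedged illustration of the Snowflake-plus-Python work described above, here is a minimal sketch using the official Snowflake Python connector; the credentials, warehouse, and table are placeholders, not real project details.

```python
# Minimal sketch (hypothetical credentials and table) of querying Snowflake
# from Python with the official connector, a basic step in the pipeline work
# described above.
import snowflake.connector

conn = snowflake.connector.connect(
    user="ANALYTICS_USER",          # hypothetical credentials
    password="***",
    account="myorg-myaccount",
    warehouse="TRANSFORM_WH",
    database="RAW",
    schema="SALES",
)

try:
    cur = conn.cursor()
    cur.execute(
        "SELECT region, SUM(amount) AS total "
        "FROM orders GROUP BY region ORDER BY total DESC"
    )
    for region, total in cur.fetchall():
        print(region, total)
finally:
    conn.close()
```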

Job Title: AI Integration Specialist
Location: Remote
Qualifications:
- Bachelor's degree in Computer Science, Information Technology, Engineering, or a related field, or equivalent practical experience.
- Proven 6+ years of hands-on experience in designing, developing, and deploying integrations using the Workato platform.
- Strong understanding of Integration Platform as a Service (iPaaS) concepts and enterprise application integration (EAI) patterns.
- Demonstrated experience with Workato's AI capabilities, including building solutions with AI by Workato, IDP, or Agentic AI.
- Proficiency in working with various APIs (REST, SOAP), webhooks, and different data formats (JSON, XML, CSV).
- Experience with scripting languages (e.g., Ruby, Python, JavaScript) for custom logic within Workato recipes is a plus.
- Solid understanding of database concepts and experience with SQL/NoSQL databases.
- Excellent problem-solving, analytical, and troubleshooting skills with keen attention to detail.
- Strong communication and interpersonal skills, with the ability to effectively collaborate with technical and non-technical stakeholders.
- Workato certifications (e.g., Automation Pro I, II, III, Integration Developer) are highly desirable.
- Experience with Agile/Scrum methodologies is a plus.

At TechBiz Global, we provide recruitment services to the top clients in our portfolio. We are currently seeking 4 DevOps Support Engineers to join one of our clients' teams in India, who can start by the 20th of July. If you're looking for an exciting opportunity to grow in an innovative environment, this could be the perfect fit for you.
Job requirements
Key Responsibilities:
- Monitor and troubleshoot AWS and/or Azure environments to ensure optimal performance and availability.
- Respond promptly to incidents and alerts, investigating and resolving issues efficiently.
- Perform basic scripting and automation tasks to streamline cloud operations (e.g., Bash, Python).
- Communicate clearly and fluently in English with customers and internal teams.
- Collaborate closely with the Team Lead, following Standard Operating Procedures (SOPs) and escalation workflows.
- Work in a rotating shift schedule, including weekends and nights, ensuring continuous support coverage.
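As an illustration of the "basic scripting and automation" called out above, here is a minimal sketch (AWS credentials assumed to be configured; the region and use case are hypothetical) that lists CloudWatch alarms currently firing so an on-shift engineer can triage them.

```python
# Minimal sketch of basic cloud-ops automation: list CloudWatch alarms
# currently in the ALARM state so the on-shift engineer can triage them.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="ap-south-1")

response = cloudwatch.describe_alarms(StateValue="ALARM")
for alarm in response["MetricAlarms"]:
    print(f'{alarm["AlarmName"]}: {alarm["StateReason"]}')
```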
Shift Details:
- Engineers work about 4 to 5 shifts per week, rotating through morning, evening, and night shifts (including weekends) to cover 24/7 support evenly across the team.
- Rotation ensures no single engineer is always working nights or weekends; the load is shared fairly.
Qualifications:
- 2–5 years of experience in DevOps or cloud support roles.
- Strong familiarity with AWS and/or Azure cloud environments.
- Experience with CI/CD tools such as GitHub Actions or Jenkins.
- Proficiency with monitoring tools like Datadog, CloudWatch, or similar.
- Basic scripting skills in Bash, Python, or comparable languages.
- Excellent communication skills in English.
- Comfortable and willing to work in a shift-based support role, including night and weekend shifts.
- Prior experience in a shift-based support environment is preferred.
What We Offer:
- Remote work opportunity — work from anywhere in India with a stable internet connection.
- Comprehensive training program including:
- Shadowing existing processes to gain hands-on experience.
- Learning internal tools, Standard Operating Procedures (SOPs), ticketing systems, and escalation paths to ensure smooth onboarding and ongoing success.


Job Title: Technical Account Manager (Remote)
We are seeking a Technical Account Manager (TAM) to join our team and support a fast-growing, global SaaS company in the observability and log management space. This is a remote role, ideal for a technically skilled individual who is passionate about customer success and cloud technologies.
Key Responsibilities:
- Act as the primary technical contact for assigned customers, ensuring smooth onboarding, implementation, and continuous engagement.
- Solve customer technical challenges by integrating data sources, optimizing platform use, and ensuring successful adoption of the product.
- Understand customers’ business goals and technical environments to deliver tailored solutions and meaningful insights.
- Lead onboarding processes including integration setup, artifact creation, training sessions, and troubleshooting.
- Build and execute strategic plans for each customer based on data analysis, technical requirements, and business objectives.
- Stay current on observability, log management, and monitoring trends to provide best practice guidance.
- Conduct Quarterly Business Reviews (QBRs) to highlight value delivered and align future goals.
- Collaborate cross-functionally with Product, Engineering, and Sales to represent customer needs and influence the product roadmap.
- Support Sales on renewals, upsells, cross-sells, and customer expansion opportunities.
Requirements:
- Proven experience in a customer-facing technical role (e.g. TAM, Solutions Engineer, Support Engineer, etc.).
- Strong understanding of cloud infrastructure, log management, and observability tools.
- Experience with technical integrations, complex troubleshooting, and working with APIs.
- Excellent communication skills in English – both written and verbal.
- Strong interpersonal and presentation skills with the ability to engage both technical and non-technical stakeholders.
- Ability to manage multiple customer accounts and projects effectively.
- Comfortable coding in at least one modern programming language (e.g., Java, Python, Go) – an advantage.
- Bachelor’s degree in Computer Science, Engineering, or a related field – preferred.
- Experience in SaaS B2B software companies – an advantage.
What We Offer:
- Remote-first work culture with flexible hours
- Opportunity to work with leading-edge technology in the observability and log management space
- Collaborative and innovative team environment
- Exposure to global enterprise clients and technical challenges
Job Title: Senior/Lead Performance Test Engineer (JMeter Specialist)
Experience: 5-10 Years
Location: Remote / Pune, India
Job Summary:
We are looking for a highly skilled and experienced Senior/Lead Performance Test Engineer with a strong background in Apache JMeter to lead and execute performance testing initiatives for our web and mobile applications. The ideal candidate will be a hands-on expert in designing, scripting, executing, and analyzing complex performance tests, identifying bottlenecks, and collaborating with cross-functional teams to optimize system performance. This role is critical in ensuring our applications deliver exceptional user experiences under various load conditions.
Key Responsibilities:
Performance Test Strategy & Planning:
Define, develop, and implement comprehensive performance test strategies and plans aligned with project requirements and business objectives for web and mobile applications.
Collaborate with product owners, developers, architects, and operations teams to understand non-functional requirements (NFRs) and service level agreements (SLAs).
Determine appropriate performance test types (Load, Stress, Endurance, Spike, Scalability) and define relevant performance metrics and acceptance criteria.
Scripting & Test Development (JMeter Focus):
Design, develop, and maintain robust and scalable performance test scripts using Apache JMeter for various protocols (HTTP/S, REST, SOAP, JDBC, etc.).
Implement advanced JMeter features including correlation, parameterization, assertions, custom listeners, and logic controllers to simulate realistic user behavior.
Develop modular and reusable test assets.
Integrate performance test scripts into CI/CD pipelines (e.g., Jenkins, GitLab CI, Azure DevOps) for continuous performance monitoring.
Test Execution & Monitoring:
Set up and configure performance test environments, ensuring they accurately mimic production infrastructure (including cloud environments like AWS, Azure, GCP).
Execute performance tests in various environments, managing large-scale load generation using JMeter (standalone or distributed mode).
Monitor system resources (CPU, Memory, Disk I/O, Network) and application performance metrics using various tools (e.g., Grafana, Prometheus, ELK stack, AppDynamics, Dynatrace, New Relic) during test execution.
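For context on non-GUI execution in a pipeline, here is a hedged sketch of launching JMeter from Python via its standard command-line flags; the test-plan and output file names are hypothetical, and a real CI job might call the binary directly instead.

```python
# Minimal sketch (hypothetical file names) of kicking off a JMeter run in
# non-GUI mode from Python, the sort of step that gets wired into a CI/CD
# pipeline; assumes the `jmeter` binary is on PATH.
import subprocess

result = subprocess.run(
    [
        "jmeter",
        "-n",                       # non-GUI mode
        "-t", "checkout_load.jmx",  # hypothetical test plan
        "-l", "results.jtl",        # raw results log
        "-e", "-o", "html_report",  # generate an HTML dashboard
    ],
    check=True,
)
print("JMeter run finished with code", result.returncode)
```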
Analysis & Reporting:
Analyze complex performance test results, identify performance bottlenecks, and pinpoint root causes across application, database, and infrastructure layers.
Interpret monitoring data, logs, and profiling reports to provide actionable insights and recommendations for performance improvements.
Prepare clear, concise, and comprehensive performance test reports, presenting findings, risks, and optimization recommendations to technical and non-technical stakeholders.
Collaboration & Mentorship:
Work closely with development and DevOps teams to troubleshoot, optimize, and resolve performance issues.
Act as a subject matter expert in performance testing, providing technical guidance and mentoring to junior team members.
Contribute to the continuous improvement of performance testing processes, tools, and best practices.
Required Skills & Qualifications:
Bachelor's degree in Computer Science, Engineering, or a related field.
5-10 years of hands-on experience in performance testing, with a strong focus on web and mobile applications.
Expert-level proficiency with Apache JMeter for scripting, execution, and analysis.
Strong understanding of performance testing methodologies, concepts (e.g., throughput, response time, latency, concurrency), and lifecycle.
Experience with performance monitoring tools such as Grafana, Prometheus, CloudWatch, Azure Monitor, GCP Monitoring, AppDynamics, Dynatrace, or New Relic.
Solid understanding of web technologies (HTTP/S, REST APIs, WebSockets, HTML, CSS, JavaScript) and modern application architectures (Microservices, Serverless).
Experience with database performance analysis (SQL/NoSQL) and ability to write complex SQL queries.
Familiarity with cloud platforms (AWS, Azure, GCP) and experience in testing applications deployed in cloud environments.
Proficiency in scripting languages (e.g., Groovy, Python, Shell scripting) for custom scripting and automation.
Excellent analytical, problem-solving, and debugging skills.
Strong communication (written and verbal) and interpersonal skills, with the ability to effectively collaborate with diverse teams and stakeholders.
Ability to work independently, manage multiple priorities, and thrive in a remote or hybrid work setup.
Good to Have Skills:
Experience with other performance testing tools (e.g., LoadRunner, Gatling, k6, BlazeMeter).
Knowledge of CI/CD pipelines and experience integrating performance tests into automated pipelines.
Understanding of containerization technologies (Docker, Kubernetes).
Experience with mobile application performance testing tools and techniques (e.g., device-level monitoring, network emulation).
Certifications in performance testing or cloud platforms.
Salesforce DevOps/Release Engineer
Resource type - Salesforce DevOps/Release Engineer
Experience - 5 to 8 years
Norms - PF & UAN mandatory
Resource Availability - Immediate or Joining time in less than 15 days
Job - Remote
Shift timings - UK timing (1pm to 10 pm or 2pm to 11pm)
Required Experience:
- 5–6 years of hands-on experience in Salesforce DevOps, release engineering, or deployment management.
- Strong expertise in Salesforce deployment processes, including CI/CD pipelines.
- Significant hands-on experience with at least two of the following tools: Gearset, Copado, Flosum.
- Solid understanding of Salesforce architecture, metadata, and development lifecycle.
- Familiarity with version control systems (e.g., Git) and agile methodologies
Key Responsibilities:
- Design, implement, and manage CI/CD pipelines for Salesforce deployments using Gearset, Copado, or Flosum.
- Automate and optimize deployment processes to ensure efficient, reliable, and repeatable releases across Salesforce environments.
- Collaborate with development, QA, and operations teams to gather requirements and ensure alignment of deployment strategies.
- Monitor, troubleshoot, and resolve deployment and release issues.
- Maintain documentation for deployment processes and provide training on best practices.
- Stay updated on the latest Salesforce DevOps tools, features, and best practices.
Technical Skills:
- Deployment Tools: Hands-on with Gearset, Copado, Flosum for Salesforce deployments
- CI/CD: Building and maintaining pipelines, automation, and release management
- Version Control: Proficiency with Git and related workflows
- Salesforce Platform: Understanding of metadata, SFDX, and environment management
- Scripting: Familiarity with scripting (e.g., Shell, Python) for automation (preferred)
- Communication: Strong written and verbal communication skills
Preferred Qualifications:
Bachelor’s degree in Computer Science, Information Technology, or related field.
Certifications:
Salesforce certifications (e.g., Salesforce Administrator, Platform Developer I/II) are a plus.
Experience with additional DevOps tools (Jenkins, GitLab, Azure DevOps) is beneficial.
Experience with Salesforce DX and deployment strategies for large-scale orgs.

We are looking for an experienced and dynamic technical trainer to join our team. The ideal candidate will be responsible for designing, developing, and delivering high-quality technical training programs to students.


We are looking for a dynamic and skilled Business Analyst Trainer with 2 to 5 years of hands-on industry and/or teaching experience. The ideal candidate should be able to simplify complex data concepts, mentor aspiring professionals, and deliver effective training programs in Business Analysis, Power BI, Tableau, and Machine Learning.

🔍 We’re Hiring: Solution Engineer (Observability)
📍 Location: Remote/Onsite
🏢 Company: Product-Based Client
🕒 Experience: 5+ Years
💼 Employment Type: Full-Time
About the Role:
We are hiring for a Solution Engineer – Observability role with one of our fast-scaling product-based clients. This position is ideal for someone with strong technical acumen and exceptional communication skills who enjoys working at the intersection of engineering and customer success.
As a Solution Engineer, you will lead technical conversations with a range of personas—from DevOps teams to C-suite executives—while delivering innovative observability solutions that showcase real value.
Key Responsibilities:
- 🤝 Collaborate closely with Account Executives on technical sales strategy and execution for complex deals.
- 🎤 Deliver engaging product demos and technical presentations tailored to various stakeholder levels.
- 🛠️ Manage technical sales activities including discovery, sizing, architecture planning, and Proof of Concepts (POCs).
- 🔧 Design and implement custom solutions to bridge product gaps and extend core functionality.
- 💡 Provide expert guidance on observability best practices, tools, and frameworks that align with customer needs.
- 📈 Stay current with industry trends, tools, and the evolving Observability ecosystem.
Requirements:
- ✅ 5+ years in a customer-facing technical role such as Pre-Sales Engineer, Solutions Architect, or Technical Consultant.
- ✅ Strong communication, interpersonal, and presentation skills—able to convey complex topics clearly and persuasively.
- ✅ Proven experience with technical integration, conducting POCs, and building tailored observability solutions.
- ✅ Proficiency in one or more programming languages: Java, Go, or Python.
- ✅ Solid understanding of Observability, Monitoring, Log Management, and SIEM tools and methodologies.
- ✅ Familiarity with observability-related platforms such as APM, RUM, and Log Analytics is desirable.
- ✅ Strong hands-on expertise in:
- Cloud platforms: AWS, Azure, GCP
- Containerization & Orchestration: Docker, Kubernetes
- Monitoring stacks: Prometheus, OpenTelemetry
Bonus Points For:
- 🧠 Previous experience in technical sales within APM, Logging, Monitoring, or SIEM platforms
Why Join?
- Work with a cutting-edge product solving complex observability challenges
- Be a key voice in the pre-sales and solutioning cycle
- Partner with cross-functional teams and engage directly with top-tier clients
- Enjoy a collaborative, high-growth environment focused on innovation and performance


We are seeking a passionate and knowledgeable Data Science and Data Analyst Trainer to deliver engaging and industry-relevant training programs. The trainer will be responsible for teaching core concepts in data analytics, machine learning, data visualization, and related tools and technologies. The ideal candidate will have hands-on experience in the data domain with 2-5 years and a flair for teaching and mentoring students or working professionals.

About Us:
Heyo & MyOperator are India’s largest conversational platforms, delivering Call + WhatsApp engagement solutions to over 40,000+ businesses. Trusted by brands like Astrotalk, Lenskart, and Caratlane, we power customer engagement at scale. We support a hybrid work model, foster a collaborative environment, and offer fast-track growth opportunities.
Job Overview:
We are looking for a skilled Quality Analyst with 2-4 years of experience in software quality assurance. The ideal candidate should have a strong understanding of testing methodologies, automation tools, and defect tracking to ensure high-quality software products. This is a fully remote role.
Key Responsibilities:
● Develop and execute test plans, test cases, and test scripts for software products.
● Conduct manual and automated testing to ensure reliability and performance.
● Identify and document defects, and collaborate with developers to resolve them.
● Report testing progress and results to stakeholders and management.
● Improve automation testing processes for efficiency and accuracy.
● Stay updated with the latest QA trends, tools, and best practices.
Required Skills:
● 2-4 years of experience in software quality assurance.
● Strong understanding of testing methodologies and automated testing.
● Proficiency in Selenium, Rest Assured, Java, and API Testing (mandatory).
● Familiarity with Appium, JMeter, TestNG, defect tracking, and version control tools.
● Strong problem-solving, analytical, and debugging skills.
● Excellent communication and collaboration abilities.
● Detail-oriented with a commitment to delivering high-quality results.
Why Join Us?
● Fully remote work with flexible hours.
● Exposure to industry-leading technologies and practices.
● Collaborative team culture with growth opportunities.
● Work with top brands and innovative projects.

We’re seeking a passionate and skilled Technical Trainer to deliver engaging, hands-on training in HTML, CSS, and Python-based front-end development. You’ll mentor learners, design curriculum, and guide them through real-world projects to build strong foundational and practical skills.


Job Title : Senior Python Developer
Experience : 7+ Years
Location : Remote or Hybrid (Gurgaon / Coimbatore / Hyderabad)
Job Summary :
We are looking for a highly skilled and motivated Senior Python Developer to join our dynamic engineering team.
The ideal candidate will have a strong foundation in web application development using Python and related frameworks. A passion for writing clean, scalable code and solving complex technical challenges is essential for success in this role.
Mandatory Skills : Python (3.x), FastAPI or Flask, PostgreSQL or Oracle, ORM, API Microservices, Agile Methodologies, Clean Code Practices.
Required Skills and Qualifications :
- 7+ Years of hands-on experience in Python (3.x) development.
- Strong proficiency in FastAPI or Flask frameworks.
- Experience with relational databases like PostgreSQL, Oracle, or similar, along with ORM tools.
- Demonstrated experience in building and maintaining API-based microservices.
- Solid grasp of Agile development methodologies and version control practices.
- Strong analytical and problem-solving skills.
- Ability to write clean, maintainable, and well-documented code.
Nice to Have :
- Experience with Google Cloud Platform (GCP) or other cloud providers.
- Exposure to Kubernetes and container orchestration tools.


About the CryptoXpress Partner Program
Earn lifetime income by liking posts; posting memes, art, and simple threads; engaging on Twitter, Quora, Reddit, or Instagram; driving referral signups; and earning commission from transactions such as flights, hotels, trades, and gift cards.
(Apply link at the bottom)
More Details:
- Student Partner Program - https://cryptoxpress.com/student-partner-program
- Ambassador Program - https://cryptoxpressambassadors.com
CryptoXpress has built two powerful tracks to help students gain experience, earn income, and launch real careers:
🌱 Growth Partner: Bring in new users, grow the network, and earn lifetime income from your referrals' transactions like trades, investments, flight/hotel/gift card purchases.
🎯 CX Ambassador: Complete creative tasks, support the brand, and get paid by liking posts, creating simple threads, memes, art, sharing your experience, and engaging on Twitter, Quora, Reddit, or Instagram.
Participants will be rewarded with payments, internship certificates, mentorship, certified Web3 learning and career opportunities.
About the Role
CryptoXpress is looking for a skilled Backend Engineer to build the core logic powering our Partner Program reward engines, task pipelines, and content validation systems. Your work will directly impact how we scale fair, fast, and fraud-proof systems for global Student Partners and CX Ambassadors.
Key Responsibilities
- Design APIs to handle submission, review, and payout logic
- Develop XP, karma, and level-up algorithms with fraud resistance
- Create content verification checkpoints (e.g., metadata checks, submission throttles)
- Handle rate limits, caching, retries, and fallback for reward processing
- Collaborate with AI and frontend engineers for seamless data flow
- Debug reward or submission logic
- Fix issues in task flows or XP systems
- Patch verification bugs or payout edge cases
- Optimize performance and API stability
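As a hypothetical sketch of the submission-plus-throttling concerns above (not the actual CryptoXpress API), here is a minimal FastAPI endpoint with a naive in-process rate limit; a production version would use Redis or a similar shared store.

```python
# Minimal, hypothetical sketch of a submission endpoint with a naive
# per-user rate limit; a real service would back this with Redis or similar,
# not process memory.
import time
from collections import defaultdict

from fastapi import FastAPI, HTTPException

app = FastAPI()
_last_submission: dict[str, float] = defaultdict(float)
RATE_LIMIT_SECONDS = 60  # one submission per user per minute (illustrative)

@app.post("/submissions")
async def create_submission(user_id: str, task_id: str, content_url: str):
    now = time.time()
    if now - _last_submission[user_id] < RATE_LIMIT_SECONDS:
        raise HTTPException(status_code=429, detail="Too many submissions")
    _last_submission[user_id] = now
    # In a real system: persist the submission and queue it for review.
    return {"status": "pending_review", "task_id": task_id, "url": content_url}
```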
Skills & Qualifications
- Proficient in Node.js, Python (Flask/FastAPI), or Go
- Solid understanding of PostgreSQL, Firebase, or equivalent databases
- Strong grasp of authentication, role-based permissions, and API security
- Bonus: Experience with reward engines, affiliate logic, or task-based platforms
- Bonus: Familiarity with moderation tooling or content scoring
Join us and play a key role in driving the growth of CryptoXpress in the cryptocurrency space!
Pro Tip: Tips for Application Success
- Please fill out the application below
- Explore CryptoXpress before applying, take 2 minutes to download and try the app so you understand what we're building
- Show your enthusiasm for crypto, travel, and digital innovation
- Mention any self-learning initiatives or personal crypto experiments
- Be honest about what you don't know - we value growth mindsets
How to Apply:
Interested candidates must complete the application form at


About the Role
At Ceryneian, we’re building a next-generation, research-driven algorithmic trading platform aimed at democratizing access to hedge fund-grade financial analytics. Headquartered in California, Ceryneian is a fintech innovation company dedicated to empowering traders with sophisticated yet accessible tools for quantitative research, strategy development, and execution.
Our flagship platform is currently under development. As a Backend Engineer, you will play a foundational role in designing and building the core trading engine and research infrastructure from the ground up. Your work will focus on developing performance-critical components that power backtesting, real-time strategy execution, and seamless integration with brokers and data providers. You’ll be responsible for bridging core engine logic with Python-based strategy interfaces, supporting a modular system architecture for isolated and scalable strategy execution, and building robust abstractions for data handling and API interactions. This role is central to delivering the reliability, flexibility, and performance that our users will rely on in fast-moving financial markets.
We are a remote-first team and are open to hiring exceptional candidates globally.
Core Tasks
· Build and maintain the trading engine core for execution, backtesting, and event logging.
· Develop isolated strategy execution runners to support multi-user, multi-strategy environments.
· Implement abstraction layers for brokers and market data feeds to offer a unified API experience.
· Bridge the core engine language with Python strategies using gRPC, ZeroMQ, or similar interop technologies.
· Implement logic to parse and execute JSON-based strategy DSL from the strategy builder.
· Design compute-optimized components for multi-asset workflows and scalable backtesting.
· Capture real-time state, performance metrics, and slippage for both live and simulated runs.
· Collaborate with infrastructure engineers to support high-availability deployments.
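As one possible shape for the engine-to-Python bridge mentioned above, here is a minimal ZeroMQ sketch of the strategy side; the port, message schema, and toy strategy logic are assumptions for illustration only.

```python
# Minimal sketch (hypothetical port and message schema) of the Python side of
# an engine-to-strategy bridge over ZeroMQ: the strategy process answers
# signal requests from the core engine.
import zmq

context = zmq.Context()
socket = context.socket(zmq.REP)
socket.bind("tcp://*:5555")  # the core engine connects with a REQ socket

while True:
    tick = socket.recv_json()          # e.g. {"symbol": "AAPL", "price": 189.4}
    # Toy strategy logic: buy below a threshold, otherwise hold.
    signal = "BUY" if tick["price"] < 190 else "HOLD"
    socket.send_json({"symbol": tick["symbol"], "signal": signal})
```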
Top Technical Competencies
· Proficiency in distributed systems, concurrency, and system design.
· Strong backend/server-side development skills using C++, Rust, C#, Erlang, or Python.
· Deep understanding of data structures and algorithms with a focus on low-latency performance.
· Experience with event-driven and messaging-based architectures (e.g., ZeroMQ, Redis Streams).
· Familiarity with Linux-based environments and system-level performance tuning.
Bonus Competencies
· Understanding of financial markets, asset classes, and algorithmic trading strategies.
· 3–5 years of prior Backend experience.
· Hands-on experience with backtesting frameworks or financial market simulators.
· Experience with sandboxed execution environments or paper trading platforms.
· Advanced knowledge of multithreading, memory optimization, or compiler construction.
· Educational background from Tier-I or Tier-II institutions with strong computer science fundamentals, a passion for scalable system design, and a drive to build cutting-edge fintech infrastructure.
What We Offer
· Opportunity to shape the backend architecture of a next-gen fintech startup.
· A collaborative, technically driven culture.
· Competitive compensation with performance-based bonuses.
· Flexible working hours and a remote-friendly environment for candidates across the globe.
· Exposure to financial modeling, trading infrastructure, and real-time applications.
· Collaboration with a world-class team from Pomona, UCLA, Harvey Mudd, and Claremont McKenna.
Ideal Candidate
You’re a backend-first thinker who’s obsessed with reliability, latency, and architectural flexibility. You enjoy building scalable systems that transform complex strategy logic into high-performance, real-time trading actions. You think in microseconds, architect for fault tolerance, and build APIs designed for developer extensibility.

Founding Engineer - LITMAS
About LITMAS
LITMAS is revolutionizing litigation with the first AI-powered platform built specifically for elite litigators. We're transforming how attorneys research, strategize, draft, and win cases by combining comprehensive case repositories with cutting-edge AI validation and workflow automation. We are a team incubated by experienced litigators, building the future of legal technology.
The Opportunity
We're seeking a Founding Engineer to join our core team and shape the technical foundation of LITMAS. This is a rare opportunity to build a category-defining product from the ground up, working directly with the founders to create technology that will transform the US litigation market.
As a founding engineer, you'll have significant ownership over our technical architecture, product decisions, and company culture. Your code will directly impact how thousands of attorneys practice law.
What You'll Do
- Architect and build core platform features using Python, Node.js, Next.js, React, and MongoDB
- Design and implement production-grade LLM systems with advanced tool usage, RAG pipelines, and agent architectures
- Build AI workflows that combine multiple tools for legal research, validation, and document analysis
- Create scalable RAG infrastructure to handle thousands of legal documents with high accuracy
- Implement AI tool chains that supply agents with structured tool inputs
- Design intuitive interfaces that make complex legal workflows simple and powerful
- Own end-to-end features from conception through deployment and iteration
- Establish engineering best practices for AI systems including evaluation, monitoring, and safety
- Collaborate directly with founders on product strategy and technical roadmap
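As a toy illustration of the retrieval step in a RAG pipeline (not the LITMAS implementation), here is a self-contained sketch that uses TF-IDF similarity as a stand-in for embedding search; the sample passages and query are invented.

```python
# Toy sketch of the retrieval step in a RAG pipeline, using TF-IDF similarity
# as a stand-in for embedding search; document texts are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Order granting motion to dismiss for lack of personal jurisdiction.",
    "Summary judgment denied; material facts remain in dispute.",
    "Discovery sanctions imposed for failure to produce documents.",
]

query = "When can a motion to dismiss be granted?"

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(documents)
query_vector = vectorizer.transform([query])

scores = cosine_similarity(query_vector, doc_vectors)[0]
best = scores.argmax()
print(f"Top passage (score {scores[best]:.2f}): {documents[best]}")
```

In production, the retrieved passages would come from a vector database and be injected into the LLM prompt as context, with validation applied to the generated answer.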
The Ideal Candidate
You're not just an AI engineer; you're someone who understands how to build reliable, production-grade AI systems that users can trust. You've wrestled with RAG accuracy, tool reliability, and LLM hallucinations in production. You know the difference between a demo and a system that handles real-world complexity. You're excited about applying AI to transform how legal professionals work.
What We're Looking For
Must-Haves
- Deployed production-grade LLM applications with demonstrable experience in:
- Tool usage and function calling
- RAG (Retrieval-Augmented Generation) implementation at scale
- Agent architectures and multi-step reasoning
- Prompt engineering and optimization
- Knowledge of multiple LLM providers (OpenAI, Anthropic, Cohere, open-source models)
- Background in building AI evaluation and monitoring systems
- Experience with document processing and OCR technologies
- 3+ years of production experience with Node.js, Python, Next.js, and React
- Strong MongoDB expertise including schema design and optimization
- Experience with vector databases (Pinecone, Weaviate, Qdrant, or similar)
- Full-stack mindset with ability to own features from database to UI
- Track record of shipping complex web applications at scale
- Deep understanding of LLM limitations, hallucination prevention, and validation techniques
Tech Stack
- Backend: Node.js, Express, MongoDB
- Frontend: Next.js, React, TypeScript, Modern CSS
- AI/ML: LangChain/LlamaIndex, OpenAI/Anthropic APIs, vector databases, custom AI tools
- Additional: Document processing, search infrastructure, real-time collaboration
What We Offer
- Significant equity stake: true ownership in the company you're building
- Competitive compensation: commensurate with experience
- Direct impact: your decisions shape the product and company
- Learning opportunity: work with cutting-edge AI and legal technology
- Flexible work: remote-first with a global team
- AI resources: access to the latest models and compute resources
Interview Process
Our process includes deep technical interviews and fit conversations. As part of the evaluation, there is an extensive take-home test that you should expect to take at least 4-5 hours, depending on your skill level. This allows us to see how you approach real problems similar to what you'll encounter at LITMAS.

Job Title: BigID Deployment Lead/ SME
Duration: 6+ Months
Exp. Level: 8-12yrs
Job Summary:
We are seeking a highly skilled and experienced BigID Deployment Lead / Subject Matter Expert (SME) to lead the implementation, configuration, and optimization of BigID's data intelligence platform. The ideal candidate will have deep expertise in data discovery, classification, privacy, and governance, and will play a pivotal role in ensuring successful deployment and integration of BigID solutions across enterprise environments.
Key Responsibilities:
Lead end-to-end deployment and configuration of BigID solutions in complex enterprise environments.
Serve as the primary SME for BigID, advising stakeholders on best practices, architecture, and integration strategies.
Collaborate with cross-functional teams including security, compliance, data governance, and IT to align BigID capabilities with business requirements.
Customize and fine-tune BigID policies, connectors, and scanning configurations to meet data privacy and compliance objectives (e.g., GDPR, CCPA, HIPAA).
Conduct workshops, training sessions, and knowledge transfers for internal teams and clients.
Troubleshoot and resolve technical issues related to BigID deployment, performance, and data discovery.
Stay current with BigID product updates, industry trends, and regulatory changes to ensure continuous improvement and compliance.
Required Qualifications:
Bachelor's or Master's degree in Computer Science, Information Technology, Cybersecurity, or a related field.
5+ years of experience in data governance, privacy, or security domains.
2+ years of hands-on experience with BigID platform deployment and configuration.
Strong understanding of data classification, metadata management, and data mapping.
Experience with cloud platforms (AWS, Azure, GCP) and integrating BigID with cloud-native services.
Familiarity with data privacy regulations (GDPR, CCPA, etc.) and risk management frameworks.
Excellent communication, documentation, and stakeholder management skills.
Preferred Qualifications:
BigID certification(s) or formal training.
Experience with scripting (Python, PowerShell) and API integrations.
Background in enterprise data architecture or data security.
Experience working in Agile/Scrum environments.

Description
Job Description:
Company: Springer Capital
Type: Internship (Remote, Part-Time/Full-Time)
Duration: 3–6 months
Start Date: Rolling
Compensation:
About the role:
We’re building high-performance backend systems that power our financial and ESG intelligence platforms and we want you on the team. As a Backend Engineering Intern, you’ll help us develop scalable APIs, automate data pipelines, and deploy secure cloud infrastructure. This is your chance to work alongside experienced engineers, contribute to real products, and see your code go live.
What You'll Work On:
As a Backend Engineering Intern, you’ll be shaping the systems that power financial insights.
Engineering scalable backend services in Python, Node.js, or Go
Designing and integrating RESTful APIs and microservices
Working with PostgreSQL, MongoDB, or Redis for data persistence
Deploying on AWS/GCP, using Docker, and learning Kubernetes on the fly
Automating infrastructure and shipping faster with CI/CD pipelines
Collaborating with a product-focused team that values fast iteration
What We’re Looking For:
A builder mindset – you like writing clean, efficient code that works
Strong grasp of backend languages (Python, Java, Node, etc.)
Understanding of cloud platforms and containerization basics
Basic knowledge of databases and version control
Students or self-taught engineers actively learning and building
Preferred skills:
Experience with serverless or event-driven architectures
Familiarity with DevOps tools or monitoring systems
A curious mind for AI/ML, fintech, or real-time analytics
What You’ll Get:
Real-world experience solving core backend problems
Autonomy and ownership of live features
Mentorship from engineers who’ve built at top-tier startups
A chance to grow into a full-time offer



About the Role:
We are looking for a Senior Technical Customer Success Manager to join our growing team. This is a client-facing role focused on ensuring successful adoption and value realization of our SaaS solutions. The ideal candidate will come from a strong analytics background, possess hands-on skills in SQL and Python or R, and have experience working with dashboarding tools. Prior experience in eCommerce or retail domains is a strong plus.
Responsibilities:
- Own post-sale customer relationship and act as the primary technical point of contact.
- Drive product adoption and usage through effective onboarding, training, and ongoing support.
- Work closely with clients to understand business goals and align them with product capabilities.
- Collaborate with internal product, engineering, and data teams to deliver solutions and enhancements tailored to client needs.
- Analyze customer data and usage trends to proactively identify opportunities and risks.
- Build dashboards or reports for customers using internal tools or integrations.
- Lead business reviews, share insights, and communicate value delivered.
- Support customers in configuring rules, data integrations, and troubleshooting issues.
- Drive renewal and expansion by ensuring customer satisfaction and delivering measurable outcomes.
Requirements:
- 7+ years of experience in a Customer Success, Technical Account Management, or Solution Consulting role in a SaaS or software product company.
- Strong SQL skills and working experience with Python or R.
- Experience with dashboarding tools such as Tableau, Power BI, Looker, or similar.
- Understanding of data pipelines, APIs, and data modeling.
- Excellent communication and stakeholder management skills.
- Proven track record of managing mid to large enterprise clients.
- Experience in eCommerce, retail, or consumer-facing businesses is highly desirable.
- Ability to translate technical details into business context and vice versa.
- Bachelor’s or Master’s degree in Computer Science, Analytics, Engineering, or related field.
Nice to Have:
- Exposure to machine learning workflows, recommendation systems, or pricing analytics.
- Familiarity with cloud platforms (AWS/GCP/Azure).
- Experience working with cross-functional teams in Agile environments.

Required Skills:
• Basic understanding of machine learning concepts and algorithms
• Proficiency in Python and relevant libraries (NumPy, Pandas, scikit-learn)
• Familiarity with data preprocessing techniques
• Knowledge of basic statistical concepts
• Understanding of model evaluation metrics
• Basic experience with at least one deep learning framework (TensorFlow, PyTorch)
• Strong analytical and problem-solving abilities
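For candidates unsure what "model evaluation metrics" means in practice, here is a minimal scikit-learn sketch of the train/evaluate loop the skills above refer to; it uses the built-in iris dataset so it runs as-is.

```python
# Minimal sketch of a train/evaluate loop with scikit-learn, using the
# built-in iris dataset so the example is self-contained.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, classification_report
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

predictions = model.predict(X_test)
print("Accuracy:", accuracy_score(y_test, predictions))
print(classification_report(y_test, predictions))
```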
Application Process: Create your profile on our platform, submit your portfolio, GitHub profile, or sample projects.

Primary skill set: QA Automation, Python, BDD, SQL
As Senior Data Quality Engineer you will:
- Evaluate product functionality and create test strategies and test cases to assess product quality.
- Work closely with the on-shore and the offshore team.
- Work on multiple reports validation against the databases by running medium to complex SQL queries.
- Develop a strong understanding of automation objects and integrations across various platforms and applications.
- Act as an individual contributor, exploring opportunities to improve performance and articulating the importance and advantages of proposed improvements to management.
- Integrate with SCM infrastructure to establish a continuous build and test cycle using CI/CD tools.
- Be comfortable working in Linux/Windows environments and with hybrid infrastructure models hosted on cloud platforms.
- Establish processes and a tool set to maintain automation scripts and generate regular test reports.
- Conduct peer reviews to provide feedback and ensure the test scripts are flawless.
Core/Must have skills:
- Excellent understanding of and hands-on experience in ETL/DWH testing, preferably with Databricks, paired with Python experience.
- Hands-on experience with SQL (analytical functions and complex queries), along with knowledge of using SQL client utilities effectively.
- Clear & crisp communication and commitment towards deliverables
- Experience on BigData Testing will be an added advantage.
- Knowledge on Spark and Scala, Hive/Impala, Python will be an added advantage.
Good to have skills:
- Test automation using BDD/Cucumber or TestNG, combined with strong hands-on experience in Java with Selenium; working experience with WebDriver.IO is especially valued.
- Ability to effectively articulate technical challenges and solutions
- Work experience in qTest, Jira, WebDriver.IO
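As a small, self-contained illustration of report-versus-database validation of the kind described above, here is a hedged pytest sketch in which SQLite stands in for the real warehouse and the tables are invented.

```python
# Minimal, self-contained sketch (SQLite stands in for the real warehouse) of
# validating report totals against source data, written as a pytest test.
import sqlite3

import pytest

@pytest.fixture
def db():
    conn = sqlite3.connect(":memory:")
    conn.executescript(
        """
        CREATE TABLE orders (id INTEGER, region TEXT, amount REAL);
        INSERT INTO orders VALUES (1, 'APAC', 100.0), (2, 'EMEA', 250.0);
        CREATE TABLE report_totals (region TEXT, total REAL);
        INSERT INTO report_totals VALUES ('APAC', 100.0), ('EMEA', 250.0);
        """
    )
    yield conn
    conn.close()

def test_report_matches_source(db):
    source = dict(
        db.execute("SELECT region, SUM(amount) FROM orders GROUP BY region")
    )
    report = dict(db.execute("SELECT region, total FROM report_totals"))
    assert report == source, "Report totals drifted from the source data"
```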



Title - Principal Software Engineer
Company Summary :
As the recognized global standard for project-based businesses, Deltek delivers software and information solutions to help organizations achieve their purpose. Our market leadership stems from the work of our diverse employees who are united by a passion for learning, growing and making a difference. At Deltek, we take immense pride in creating a balanced, values-driven environment, where every employee feels included and empowered to do their best work. Our employees put our core values into action daily, creating a one-of-a-kind culture that has been recognized globally. Thanks to our incredible team, Deltek has been named one of America's Best Midsize Employers by Forbes, a Best Place to Work by Glassdoor, a Top Workplace by The Washington Post and a Best Place to Work in Asia by World HRD Congress. www.deltek.com
Business Summary :
The Deltek Engineering and Technology team builds best-in-class solutions to delight customers and meet their business needs. We are laser-focused on software design, development, innovation and quality. Our team of experts has the talent, skills and values to deliver products and services that are easy to use, reliable, sustainable and competitive. If you're looking for a safe environment where ideas are welcome, growth is supported and questions are encouraged – consider joining us as we explore the limitless opportunities of the software industry.
Principal Software Engineer
Position Responsibilities :
- Develop and manage integrations with third-party services and APIs using industry-standard protocols like OAuth2 for secure authentication and authorization.
- Develop scalable, performant APIs for Deltek products
- Be accountable for the successful implementation of the requirements by the team.
- Troubleshoot, debug, and optimize code and workflows for better performance and scalability.
- Undertake analysis, design, coding and testing activities of complex modules
- Support the company’s development processes and development guidelines including code reviews, coding style and unit testing requirements.
- Participate in code reviews and provide mentorship to junior developers.
- Stay up-to-date with emerging technologies and best practices in Python development, AWS, and frontend frameworks like React, and suggest optimisations based on them.
- Adopt industry best practices in all your projects - TDD, CI/CD, Infrastructure as Code, linting
- Pragmatic enough to deliver an MVP, but aspirational enough to think about how it will work with millions of users and adapt to new challenges
- Readiness to hit the ground running – you may not know how to solve everything right off the bat, but you will put in the time and effort to understand the problem so that you can design the architecture of complex features with multiple components.
Qualifications :
- A college degree in Computer Science, Software Engineering, Information Science or a related field is required
- Minimum 8-10 years of experience and sound programming skills in Python, the .NET platform (VB & C#), TypeScript/JavaScript, frontend technologies like React.js/Ember.js, and SQL databases (like PostgreSQL)
- Experience in backend development and Apache Airflow (or equivalent framework).
- Build APIs and optimize SQL queries with performance considerations.
- Experience with Agile Development
- Experience in writing and maintaining unit tests and using testing frameworks is desirable
- Exposure to Amazon Web Services (AWS) technologies, Terraform, Docker is a plus
- Strong desire to continually improve knowledge and skills through personal development activities and apply their knowledge and skills to continuous software improvement.
- The ability to work under tight deadlines, tolerate ambiguity and work effectively in an environment with multiple competing priorities.
- Strong problem-solving and debugging skills.
- Ability to work in an Agile environment and collaborate with cross-functional teams.
- Familiarity with version control systems like Git.
- Excellent communication skills and the ability to work effectively in a remote or hybrid team setting.

About Us:
MyOperator and Heyo are India’s leading conversational platforms empowering 40,000+ businesses with Call and WhatsApp-based engagement. We’re a product-led SaaS company scaling rapidly, and we’re looking for a skilled Software Developer to help build the next generation of scalable backend systems.
Role Overview:
We’re seeking a passionate Python Developer with strong experience in backend development and cloud infrastructure. This role involves building scalable microservices, integrating AI tools like LangChain/LLMs, and optimizing backend performance for high-growth B2B products.
Key Responsibilities:
- Develop robust backend services using Python, Django, and FastAPI
- Design and maintain scalable microservices architecture
- Integrate LangChain/LLMs into AI-powered features
- Write clean, tested, and maintainable code with pytest
- Manage and optimize databases (MySQL/Postgres)
- Deploy and monitor services on AWS
- Collaborate across teams to define APIs, data flows, and system architecture
Must-Have Skills:
- Python and Django
- MySQL or Postgres
- Microservices architecture
- AWS (EC2, RDS, Lambda, etc.)
- Unit testing using pytest
- LangChain or Large Language Models (LLM)
- Strong grasp of Data Structures & Algorithms
- AI coding assistant tools (e.g., ChatGPT & Gemini)
Good to Have:
- MongoDB or ElasticSearch
- Go or PHP
- FastAPI
- React, Bootstrap (basic frontend support)
- ETL pipelines, Jenkins, Terraform
Why Join Us?
- 100% Remote role with a collaborative team
- Work on AI-first, high-scale SaaS products
- Drive real impact in a fast-growing tech company
- Ownership and growth from day one


We are looking for a hands-on technical expert who has worked with multiple technology stacks and has experience architecting and building scalable cloud solutions with web and mobile frontends.
What will you work on?
- Interface with clients
- Recommend tech stacks
- Define end-to-end logical and cloud-native architectures
- Define APIs
- Integrate with 3rd party systems
- Create architectural solution prototypes
- Hands-on coding, team lead, code reviews, and problem-solving
What Makes You A Great Fit?
- 5+ years of software experience
- Experience architecting technology systems, with hands-on expertise in backend and web or mobile frontend development
- Solid expertise and hands-on experience in Python with Flask or Django
- Expertise on one or more cloud platforms (AWS, Azure, Google App Engine)
- Expertise with SQL and NoSQL databases (MySQL, Mongo, ElasticSearch, Redis)
- Knowledge of DevOps practices
- Chatbot, Machine Learning, Data Science/Big Data experience will be a plus
- Excellent communication skills, verbal and written
About Us: We offer CTO-as-a-service and Product Development for Startups. We value our employees and provide them with an intellectually stimulating environment where everyone’s ideas and contributions are valued.


LendFlow is an AI-powered home loan assessment platform that helps mortgage brokers and lenders save hours by automating document analysis, income validation, and serviceability assessment. We turn complex financial documents into clear insights—fast.
We’re building a smart assistant that ingests client docs (bank statements, payslips, loan summaries) and uses modular AI agents to extract, classify, and summarize financial data in minutes, not hours. Think OCR + AI agents + compliance-ready outputs.
🛠️ What You’ll Be Building
As part of our early technical team, you’ll help us develop and launch our MVP. Key modules include:
- Document ingestion and OCR processing (Textract, Document AI); see the sketch after this list
- AI agent workflows using LangChain or CrewAI
- Serviceability calculators with business rule engines
- React + Next.js frontend for brokers and analysts
- FastAPI backend with PostgreSQL
- Security, encryption, audit logging (privacy-first design)
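As a rough, illustrative sketch of the ingestion module (the extract_fields helper below is a hypothetical stub that a real build would back with Textract, Document AI, or an AI-agent pipeline):

```python
# A minimal document-ingestion endpoint; extract_fields is a hypothetical placeholder.
from fastapi import FastAPI, File, UploadFile

app = FastAPI()

def extract_fields(raw: bytes) -> dict:
    # Placeholder for OCR + AI-agent extraction and classification.
    return {"bytes_received": len(raw), "document_type": "unknown"}

@app.post("/documents")
async def ingest_document(file: UploadFile = File(...)) -> dict:
    raw = await file.read()
    fields = extract_fields(raw)
    # In a real system the structured fields would be persisted to PostgreSQL
    # and handed to the serviceability calculator.
    return {"filename": file.filename, "fields": fields}
```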
🎯 We’re Looking For:
Must-Have Skills:
- Strong experience with Python (FastAPI, OCR, LLMs, prompt engineering)
- Familiarity with AI agent frameworks (LangChain, CrewAI, Autogen, or similar)
- Frontend skills in React.js / Next.js
- Experience with PostgreSQL and cloud storage (AWS/GCP)
- Understanding of financial documents and data privacy best practices
Bonus Points:
- Experience with OCR tools like Amazon Textract, Tesseract, or Document AI
- Building ML/NLP pipelines in real-world apps
- Prior work in fintech, lending, or proptech sectors

About Us: Certa is an emerging leader in the fast-growing Enterprise Workflow Automation industry with advanced “no-code” SaaS workflow solutions. Our platform addresses the entire lifecycle for Suppliers/Third-parties/Partners covering onboarding, risk assessment, contract lifecycle management, and ongoing monitoring. Certa offers the most automated and "ridiculously" configurable solutions disrupting the customer/counterparty KYC & AML space. The Certa platform brings business functions like Procurement, Sales, Compliance, Legal, InfoSec, Privacy, etc., together via an easy collaborative workflow, automated risk scoring, and ongoing monitoring for key ‘shifts in circumstance’. Our data-agnostic, open-API platform ensures that Clients can take a best-in-class approach when leveraging any of our 80+ (& growing) existing data and tech partner integrations. As a result, Certa enables clients to onboard Third Parties & KYC customers faster, with less effort, with no swivel chair syndrome and maintains a constantly updated searchable knowledge repository of all records.
Certa’s clients range from the largest & leading global firms in their Industry (Retail, Aerospace, Payments, Consulting, Ridesharing, and Commercial Data) to mid-stage start-ups.
Responsibilities:
As a Solutions Engineer at our technology product company, you will play a critical role in ensuring the successful integration and customisation of our product offerings for clients. Your primary responsibilities will involve configuring our software solutions to meet our client's unique requirements and business use cases. Additionally, you will be heavily involved in API integrations to enable seamless data flow and connectivity between our products and various client systems.
- Client Requirement Analysis: Collaborate with the sales and client-facing teams to understand client needs, business use cases, and specific requirements for implementing our technology products.
- Product Configuration: Utilize your technical expertise to configure and customise our software solutions according to the identified client needs and business use cases. This may involve setting up workflows, defining data structures, and enabling specific features or functionalities.
- API Integration: Work closely with the development and engineering teams to design, implement, and manage API integrations with external systems, ensuring smooth data exchange and interoperability (see the sketch after this list).
- Solution Design: Participate in solution design discussions with clients and internal stakeholders, providing valuable insights and recommendations based on your understanding of the technology and the business domain.
- Troubleshooting: Identify and resolve configuration-related issues and challenges that arise during the implementation and integration process, ensuring the smooth functioning of the product.
- Documentation: Create and maintain detailed documentation of configurations, customisations, and integration processes to facilitate knowledge sharing within the organisation and with clients.
- Quality Assurance: Conduct thorough unit testing of configurations and integrations to verify that they meet the defined requirements and perform as expected.
- Client Support: Provide support and guidance to clients during the onboarding and post-implementation phases, assisting them with any questions or concerns related to configuration and integration.
- Continuous Improvement: Stay up-to-date with the latest product features, industry trends, and best practices in configuration and integration, and proactively suggest improvements to enhance the overall efficiency and effectiveness of the process.
- Cross-Functional Collaboration: Work closely with different teams, including product management, engineering, marketing, and sales, to align product development with business goals and customer needs.
- Product Launch and Support: Assist in the product launch by providing technical support, conducting training sessions, and addressing customer inquiries. Collaborate with customer support teams to troubleshoot and resolve complex technical issues.
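For illustration only, here is a minimal sketch of the kind of REST API integration this role involves, using the requests library; the base URL, token handling, and payload shape are hypothetical.

```python
import requests

BASE_URL = "https://api.example.com"   # hypothetical client system
TOKEN = "replace-with-a-real-token"    # e.g. obtained via an OAuth2 client-credentials flow

def push_onboarding_record(record: dict) -> dict:
    """POST a JSON record to a hypothetical third-party endpoint and return the parsed response."""
    resp = requests.post(
        f"{BASE_URL}/v1/third-parties",
        json=record,
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()  # surface integration/configuration errors early
    return resp.json()

if __name__ == "__main__":
    print(push_onboarding_record({"name": "Acme Supplies", "country": "IN"}))
```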
Requirements :
- 3 - 5 years in a similar capacity with a proven track record of implementation excellence working with medium to large enterprise customers
- Strong analytical skills with the ability to grasp complex business use cases and translate them into technical solutions.
- Bachelor’s Degree Required with a preference for Engineering or equivalent.
- Practical experience working on ERP integrations, process documentation and requirements-gathering tools like MIRO or VISIO is a plus.
- Proficiency in API integration and understanding of RESTful APIs and web services.
- Technical expertise in relevant programming languages and platforms related to the technology product.
- Exceptional communication skills to interact with clients, understand their requirements, and explain technical concepts clearly and concisely.
- Results-oriented and inherently curious mindset capable of influencing internal and external partners to drive priorities and outcomes.
- Independent operator capable of taking limited direction and applying the best action.
- Excellent communication, presentation, negotiation, and interpersonal skills.
- Ability to create structure in ambiguous situations and design effective processes.
- Experience with JSON and SaaS Products is a plus.
- Location: Hires Remotely Everywhere
- Job Type: Full Time
- Experience: 3 - 5 years
- Languages: Excellent command of the English Language

About the Company
Hypersonix.ai is disrupting the e-commerce space with AI, ML, and advanced decision-making capabilities to drive real-time business insights. Built from the ground up using modern technologies, Hypersonix simplifies data consumption for customers across various industry verticals. We are seeking a well-rounded, hands-on product leader to help manage key capabilities and features in our platform.
Position Overview
We are seeking a highly skilled Web Scraping Architect to join our team. The successful candidate will be responsible for designing, implementing, and maintaining web scraping processes to gather data from various online sources efficiently and accurately. As a Web Scraping Specialist, you will play a crucial role in collecting data for competitor analysis and other business intelligence purposes.
Responsibilities
- Scalability/Performance: Lead and provide expertise in scraping e-commerce marketplaces at scale.
- Data Source Identification: Identify relevant websites and online sources from which data needs to be scraped. Collaborate with the team to understand data requirements and objectives.
- Web Scraping Design: Develop and implement effective web scraping strategies to extract data from targeted websites. This includes selecting appropriate tools, libraries, or frameworks for the task.
- Data Extraction: Create and maintain web scraping scripts or programs to extract the required data. Ensure the code is optimized, reliable, and can handle changes in the website's structure (see the sketch after this list).
- Data Cleansing and Validation: Cleanse and validate the collected data to eliminate errors, inconsistencies, and duplicates. Ensure data integrity and accuracy throughout the process.
- Monitoring and Maintenance: Continuously monitor and maintain the web scraping processes. Address any issues that arise due to website changes, data format modifications, or anti-scraping mechanisms.
- Scalability and Performance: Optimize web scraping procedures for efficiency and scalability, especially when dealing with a large volume of data or multiple data sources.
- Compliance and Legal Considerations: Stay up-to-date with legal and ethical considerations related to web scraping, including website terms of service, copyright, and privacy regulations.
- Documentation: Maintain detailed documentation of web scraping processes, data sources, and methodologies. Create clear and concise instructions for others to follow.
- Collaboration: Collaborate with other teams such as data analysts, developers, and business stakeholders to understand data requirements and deliver insights effectively.
- Security: Implement security measures to ensure the confidentiality and protection of sensitive data throughout the scraping process.
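To make the extraction and user-agent-rotation points concrete, here is a minimal, illustrative scraper using requests and BeautifulSoup; the target URL and CSS selector are hypothetical, and a production system would add proxies, retries, rate limiting, and compliance checks.

```python
import random
import requests
from bs4 import BeautifulSoup

# A tiny pool of user agents to rotate between requests (illustrative only).
USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7)",
]

def fetch_product_titles(url: str) -> list[str]:
    """Fetch a page and extract product titles; the selector is a hypothetical example."""
    headers = {"User-Agent": random.choice(USER_AGENTS)}
    resp = requests.get(url, headers=headers, timeout=15)
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")
    return [node.get_text(strip=True) for node in soup.select("h2.product-title")]

if __name__ == "__main__":
    print(fetch_product_titles("https://example.com/catalog"))
```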
Requirements
- Proven experience of 7+ years as a Web Scraping Specialist or similar role, with a track record of successful web scraping projects
- Expertise in handling dynamic content, rotating user agents, bypassing CAPTCHAs and rate limits, and using proxy services
- Knowledge of browser fingerprinting
- Leadership experience
- Proficiency in Python and in libraries commonly used for web scraping, such as BeautifulSoup, Scrapy, or Selenium
- Strong knowledge of HTML, CSS, XPath, and other web technologies relevant to web scraping
- Knowledge and experience in best-in-class data storage and retrieval for large volumes of scraped data
- Understanding of web scraping best practices, including handling dynamic content, user-agent rotation, and IP address management
- Attention to detail and ability to handle and process large volumes of data accurately
- Familiarity with data cleansing techniques and data validation processes
- Good communication skills and ability to collaborate effectively with cross-functional teams
- Knowledge of web scraping ethics, legal considerations, and compliance with website terms of service
- Strong problem-solving skills and adaptability to changing web environments
Preferred Qualifications
- Bachelor’s degree in Computer Science, Data Science, Information Technology, or related fields
- Experience with cloud-based solutions and distributed web scraping systems
- Familiarity with APIs and data extraction from non-public sources
- Knowledge of machine learning techniques for data extraction and natural language processing is desired but not mandatory
- Prior experience in handling large-scale data projects and working with big data frameworks
- Understanding of various data formats such as JSON, XML, CSV, etc.
- Experience with version control systems like Git

About the Company
Hypersonix.ai is disrupting the e-commerce space with AI, ML and advanced decision capabilities to drive real-time business insights. Hypersonix.ai has been built from the ground up with new-age technology to simplify the consumption of data for our customers in various industry verticals. Hypersonix.ai is seeking a well-rounded, hands-on product leader to help lead product management of key capabilities and features.
About the Role
We are looking for talented and driven Data Engineers at various levels to work with customers to build the data warehouse, analytical dashboards and ML capabilities as per customer needs.
Roles and Responsibilities
- Create and maintain optimal data pipeline architecture
- Assemble large, complex data sets that meet functional / non-functional business requirements; should write complex queries in an optimized way
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies (see the sketch after this list)
- Run ad-hoc analysis utilizing the data pipeline to provide actionable insights
- Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs
- Keep our data separated and secure across national boundaries through multiple data centers and AWS regions
- Work with analytics and data scientist team members and assist them in building and optimizing our product into an innovative industry leader
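A rough sketch of a small extract-transform-load step of the kind described above; the bucket, key, and column names are hypothetical.

```python
import io

import boto3
import pandas as pd

def load_daily_events(bucket: str, key: str) -> pd.DataFrame:
    """Extract a CSV of raw events from S3 and aggregate it into a per-user daily summary."""
    s3 = boto3.client("s3")
    obj = s3.get_object(Bucket=bucket, Key=key)
    df = pd.read_csv(io.BytesIO(obj["Body"].read()))
    # Example transformation: event counts per user per day (column names are assumptions).
    df["event_date"] = pd.to_datetime(df["event_ts"]).dt.date
    return df.groupby(["user_id", "event_date"]).size().reset_index(name="event_count")

if __name__ == "__main__":
    print(load_daily_events("my-raw-events-bucket", "events/2024-01-01.csv").head())
```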
Requirements
- Advanced working knowledge of SQL, experience with relational databases and query authoring, and working familiarity with a variety of databases
- Experience building and optimizing ‘big data’ data pipelines, architectures and data sets
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement
- Strong analytic skills related to working with unstructured datasets
- Experience building processes supporting data transformation, data structures, metadata, dependency management, and workload management
- A successful history of manipulating, processing and extracting value from large disconnected datasets
- Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores
- Experience supporting and working with cross-functional teams in a dynamic environment
- We are looking for a candidate with 7+ years of experience in a Data Engineer role who holds a graduate degree in Computer Science or Information Technology, or has completed an MCA.


We’re building a powerful, AI-driven communication platform — a next-generation alternative to RingCentral or 8x8 — powered by OpenAI, LangChain, and SIP/WebRTC. We're looking for a Full-Stack Software Developer who’s passionate about building real-time, AI-enabled voice infrastructure and who’s excited to work in a fast-moving, founder-led environment.
This is an opportunity to build from scratch, take ownership of core systems, and innovate on the edge of VoIP + AI.
What You’ll Do
- Design and build AI-driven voice and messaging features (e.g. smart IVRs, call transcription, virtual agents)
- Develop backend services using Python, Node.js, or Golang
- Integrate OpenAI, Whisper, and LangChain with real-time VoIP systems like Twilio, SIP, or WebRTC (see the sketch after this list)
- Create scalable APIs, handle call logic, and build AI pipelines
- Collaborate with the founder and early team on product strategy and infrastructure
- Participate in occasional in-person strategy meetings (Delhi, Bangalore, or nearby)
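As a minimal, illustrative sketch of wiring an LLM into call handling (assumes the openai>=1.0 Python client with an OPENAI_API_KEY in the environment; the endpoint shape and model name are hypothetical):

```python
from fastapi import FastAPI
from openai import OpenAI
from pydantic import BaseModel

app = FastAPI()
client = OpenAI()  # reads OPENAI_API_KEY from the environment

class CallEvent(BaseModel):
    call_id: str
    transcript: str  # e.g. produced upstream by Whisper or the telephony provider

@app.post("/call-summary")
def summarize_call(event: CallEvent) -> dict:
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical choice; any chat-capable model would do
        messages=[
            {"role": "system", "content": "Summarize the call in two sentences."},
            {"role": "user", "content": event.transcript},
        ],
    )
    return {"call_id": event.call_id, "summary": completion.choices[0].message.content}
```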
Must-Have Skills
- Strong programming experience in Python, Node.js, or Go
- Hands-on experience with VoIP/SIP, WebRTC, or tools like Twilio, Asterisk, Plivo
- Experience integrating with LLM APIs, OpenAI, or speech-to-text models
- Solid understanding of backend design, Docker, Redis, PostgreSQL
- Ability to work independently and deliver production-grade code
Nice to Have
- Familiarity with LangChain or agent-based AI systems
- Knowledge of call routing logic, STUN/TURN, or media servers (e.g. FreeSWITCH)
- Interest in building scalable cloud-first SaaS products
Work Setup
- 🏠 Remote work (India-based, must be reachable for meetings)
- 🕐 Full-time role
- 💼 Direct collaboration with founder (technical)
- 🧘♂️ Flexible hours, strong ownership culture

Senior Generative AI Engineer
Job Id: QX016
About Us:
The QX impact was launched with a mission to make AI accessible and affordable and to deliver AI products and solutions at scale for enterprises by bringing the power of data, AI, and engineering to drive digital transformation. We believe that without insights, businesses will continue to face challenges in understanding their customers and may even lose them.
Without insights, businesses won’t be able to deliver differentiated products and services; and without insights, they can’t achieve the new level of “Operational Excellence” that is crucial to remain competitive, meet rising customer expectations, expand into new markets, and digitalize.
Job Summary:
We seek a highly experienced Senior Generative AI Engineer who will focus on the development, implementation, and engineering of Gen AI applications using the latest LLMs and frameworks. This role requires hands-on expertise in Python programming, cloud platforms, and advanced AI techniques, along with additional skills in front-end technologies, data modernization, and API integration. The Senior Gen AI Engineer will be responsible for building applications from the ground up, ensuring robust, scalable, and efficient solutions.
Responsibilities:
· Build GenAI solutions such as virtual assistants, data augmentation, automated insights, and predictive analytics
· Design, develop, and fine-tune generative AI models (GANs, VAEs, Transformers).
· Handle data preprocessing, augmentation, and synthetic data generation.
· Work with NLP, text generation, and contextual comprehension tasks.
· Develop backend services using Python or .NET for LLM-powered applications.
· Build and deploy AI applications on cloud platforms (Azure, AWS, GCP).
· Optimize AI pipelines and ensure scalability.
· Stay updated with advancements in AI and ML.
Skills & Requirements:
- Strong knowledge of machine learning, deep learning, and NLP.
- Proficiency in Python, TensorFlow, PyTorch, and Keras.
- Experience with cloud services, containerization (Docker, Kubernetes), and AI model deployment.
- Understanding of LLMs, embeddings, and retrieval-augmented generation (RAG); see the sketch after this list.
- Ability to work independently and as part of a team.
- Bachelor’s degree in Computer Science, Mathematics, Engineering, or a related field.
- 6+ years of experience in Gen AI or related roles.
- Experience with AI/ML model integration into data pipelines.
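To illustrate the retrieval step behind RAG, here is a minimal, framework-free sketch; the embed function is a hypothetical stand-in for a real embedding model (OpenAI, SentenceTransformers, etc.), and the retrieved passages would normally be added to the LLM prompt as context.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Hypothetical stand-in for a real embedding model; returns a deterministic pseudo-vector."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.normal(size=384)

def top_k(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by cosine similarity to the query embedding and return the top k."""
    q = embed(query)
    scores = []
    for doc in documents:
        d = embed(doc)
        scores.append(float(q @ d / (np.linalg.norm(q) * np.linalg.norm(d))))
    order = np.argsort(scores)[::-1][:k]
    return [documents[i] for i in order]

if __name__ == "__main__":
    docs = ["Refund policy text", "Shipping policy text", "Warranty terms"]
    print(top_k("How do I get my money back?", docs))
```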
Core Competencies for Generative AI Engineers:
1. Programming & Software Development
a. Python – Proficiency in writing efficient and scalable code, with strong knowledge of NumPy, Pandas, TensorFlow, PyTorch, and Scikit-learn.
b. LLM Frameworks – Experience with Hugging Face Transformers, LangChain, OpenAI API, and similar tools for building and deploying large language models.
c. API development and integration using FastAPI, Flask, Django, RESTful APIs, or WebSockets.
d. Knowledge of version control, containerization, CI/CD pipelines, and unit testing.
2. Vector Database & Cloud AI Solutions
a. Pinecone, FAISS, ChromaDB, Neo4j
b. Azure Redis/ Cognitive Search
c. Azure OpenAI Service
d. Azure ML Studio Models
e. AWS (Relevant Services)
3. Data Engineering & Processing
- Handling large-scale structured & unstructured datasets.
- Proficiency in SQL, NoSQL (PostgreSQL, MongoDB), Spark, and Hadoop.
- Feature engineering and data augmentation techniques.
4. NLP & Computer Vision
- NLP: Tokenization, embeddings (Word2Vec, BERT, T5, LLaMA).
- CV: Image generation using GANs, VAEs, Stable Diffusion.
- Document Embedding – Experience with vector databases (FAISS, ChromaDB, Pinecone) and embedding models (BGE, OpenAI, SentenceTransformers).
- Text Summarization – Knowledge of extractive and abstractive summarization techniques using models like T5, BART, and Pegasus.
- Named Entity Recognition (NER) – Experience in fine-tuning NER models and using pre-trained models from SpaCy, NLTK, or Hugging Face.
- Document Parsing & Classification – Hands-on experience with OCR (Tesseract, Azure Form Recognizer), NLP-based document classifiers, and tools like LayoutLM, PDFMiner.
5. Model Deployment & Optimization
- Model compression (quantization, pruning, distillation).
- Deployment using Azure CI/CD, ONNX, TensorRT, OpenVINO on AWS, GCP.
- Model monitoring (MLflow, Weights & Biases) and automated workflows (Azure Pipeline).
- API integration with front-end applications.
6. AI Ethics & Responsible AI
- Bias detection, interpretability (SHAP, LIME), and security (adversarial attacks).
7. Mathematics & Statistics
- Linear Algebra, Probability, and Optimization (Gradient Descent, Regularization, etc.).
8. Machine Learning & Deep Learning
a. Expertise in supervised, unsupervised, and reinforcement learning.
b. Proficiency in TensorFlow, PyTorch, and JAX.
c. Experience with Transformers, GANs, VAEs, Diffusion Models, and LLMs (GPT, BERT, T5).
Personal Attributes:
- Strong problem-solving skills with a passion for data architecture.
- Excellent communication skills with the ability to explain complex data concepts to non-technical stakeholders.
- Highly collaborative, capable of working with cross-functional teams.
- Ability to thrive in a fast-paced, agile environment while managing multiple priorities effectively.
Why Join Us?
- Be part of a collaborative and agile team driving cutting-edge AI and data engineering solutions.
- Work on impactful projects that make a difference across industries.
- Opportunities for professional growth and continuous learning.
- Competitive salary and benefits package.
Ready to make an impact? Apply today and become part of the QX impact team!


Role Overview
We are seeking a skilled Odoo Consultant with Python development expertise to support the design, development, and implementation of Odoo-based business solutions for our clients. The consultant will work on module customization, backend logic, API integrations, and configuration of business workflows using the Odoo framework.
Key Responsibilities
● Customize and extend Odoo modules based on client requirements
● Develop backend logic using Python and the Odoo ORM (see the sketch after this list)
● Configure business workflows, access rights, and approval processes
● Create and update views using XML and QWeb for reports and screens
● Integrate third-party systems using Odoo APIs (REST, XML-RPC)
● Participate in client discussions and translate business needs into technical solutions
● Support testing, deployment, and user training as required
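A minimal sketch of the kind of module customisation involved, following standard Odoo ORM conventions; the model extension and field names are hypothetical, and a real module would also ship a manifest and XML views.

```python
# models/res_partner.py - a minimal, hypothetical Odoo model extension.
from odoo import api, fields, models

class ResPartner(models.Model):
    _inherit = "res.partner"

    loyalty_points = fields.Integer(string="Loyalty Points", default=0)
    is_vip = fields.Boolean(string="VIP Customer", compute="_compute_is_vip", store=True)

    @api.depends("loyalty_points")
    def _compute_is_vip(self):
        # Flag partners crossing a hypothetical loyalty threshold.
        for partner in self:
            partner.is_vip = partner.loyalty_points >= 1000
```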
Required Skills
● Strong knowledge of Python and Odoo framework (v12 and above)
● Experience working with Odoo models, workflows, and security rules
● Good understanding of XML, QWeb, and PostgreSQL
● Experience in developing or integrating APIs
● Familiarity with Git and basic Linux server operations
● Good communication and documentation skills
Preferred Qualifications
● Experience in implementing Odoo for industries such as manufacturing, retail, financial services, or real estate
● Ability to work independently and manage project timelines
● Bachelor’s degree in Computer Science, Engineering, or related field


We are seeking a visionary and hands-on AI/ML and Chatbot Lead to spearhead the design, development, and deployment of enterprise-wide Conversational and Generative AI solutions. This role will be instrumental in establishing and scaling our AI Lab function, defining chatbot and multimodal AI strategies, and delivering intelligent automation solutions that enhance user engagement and operational efficiency.
Key Responsibilities
- Strategy & Leadership
- Define and lead the enterprise-wide strategy for Conversational AI, Multimodal AI, and Large Language Models (LLMs).
- Establish and scale an AI/Chatbot Lab, with a clear roadmap for innovation across in-app, generative, and conversational AI use cases.
- Lead, mentor, and scale a high-performing team of AI/ML engineers and chatbot developers.
- Architecture & Development
- Architect scalable AI/ML systems encompassing presentation, orchestration, AI, and data layers.
- Build multi-turn, memory-aware conversations using frameworks like LangChain or Semantic Kernel (see the sketch after this list).
- Integrate chatbots with enterprise platforms such as Salesforce, NetSuite, Slack, and custom applications via APIs/webhooks.
- Solution Delivery
- Collaborate with business stakeholders to assess needs, conduct ROI analyses, and deliver high-impact AI solutions.
- Identify and implement agentic AI capabilities and SaaS optimization opportunities.
- Deliver POCs, pilots, and MVPs, owning the full design, development, and deployment lifecycle.
- Monitoring & Governance
- Implement and monitor chatbot KPIs using tools like Kibana, Grafana, and custom dashboards.
- Champion ethical AI practices, ensuring compliance with governance, data privacy, and security standards.
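A framework-free illustration of the multi-turn, memory-aware pattern referenced above; the ask_llm function is a hypothetical stand-in for a LangChain, Semantic Kernel, or raw LLM call.

```python
from dataclasses import dataclass, field

@dataclass
class Conversation:
    """Keeps a rolling window of prior turns so each new prompt carries context."""
    max_turns: int = 10
    history: list = field(default_factory=list)

    def add(self, role: str, content: str) -> None:
        self.history.append({"role": role, "content": content})
        self.history = self.history[-self.max_turns:]

def ask_llm(messages: list) -> str:
    # Hypothetical stand-in for an actual LLM call.
    return f"(reply based on {len(messages)} prior messages)"

def chat_turn(conv: Conversation, user_text: str) -> str:
    conv.add("user", user_text)
    reply = ask_llm(conv.history)
    conv.add("assistant", reply)
    return reply

if __name__ == "__main__":
    conv = Conversation()
    print(chat_turn(conv, "What is my current order status?"))
    print(chat_turn(conv, "And when will it arrive?"))  # carries the earlier turn as context
```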
Must-Have Skills
- Experience & Leadership
- 10+ years of experience in AI/ML with demonstrable success in chatbot, conversational AI, and generative AI implementations.
- Proven experience in building and operationalizing AI/Chatbot architecture frameworks across enterprises.
- Technical Expertise
- Programming: Python
- AI/ML Frameworks & Libraries: LangChain, ElasticSearch, spaCy, NLTK, Hugging Face
- LLMs & NLP: GPT, BERT, RAG, prompt engineering, PEFT
- Chatbot Platforms: Azure OpenAI, Microsoft Bot Framework, CLU, CQA
- AI Deployment & Monitoring at Scale
- Conversational AI Integration: APIs, webhooks
- Infrastructure & Platforms
- Cloud: AWS, Azure, GCP
- Containerization: Docker, Kubernetes
- Vector Databases: Pinecone, Weaviate, Qdrant
- Technologies: Semantic search, knowledge graphs, intelligent document processing
- Soft Skills
- Strong leadership and team management
- Excellent communication and documentation
- Deep understanding of AI governance, compliance, and ethical AI practices
Good-to-Have Skills
- Familiarity with tools like Glean, Perplexity.ai, Rasa, XGBoost
- Experience integrating with Salesforce, NetSuite, and understanding of Customer Success domain
- Knowledge of RPA tools like UiPath and its AI Center

About Us:
At Remedo, we're building the future of digital healthcare marketing. We help doctors grow their online presence, connect with patients, and drive real-world outcomes like higher appointment bookings and better Google reviews - all while improving their SEO.
But that's just the beginning.
We're also the creators of Convertlens, our generative Al-powered engagement engine that transforms how clinics interact with patients across the web. Think hyper-personalized messaging, automated conversion funnels, and insights that actually move the needle.
We're a lean, fast-moving team with startup DNA. If you like ownership, impact, and tech that solve real problems - you'll fit right in.
What You'll Do:
• Collaborate with product managers, designers, and other devs to ideate, build, and ship high-impact features
• Own full-stack development using Node.js, Next.js, and React.js
• Build fast, responsive front-ends with pixel-perfect execution
• Design and manage scalable back-end systems with MySQL/PostgreSQL
• Troubleshoot and resolve issues from live deployments with Ops team
• Contribute to documentation, internal tools, and process improvement
• Work on our generative AI tools and help scale Convertlens.
What You Bring:
• 2+ years of experience in a product/startup environment
• Strong foundation in Node.js, Next.js, and React.js
• Solid understanding of relational databases (MySQL, PostgreSQL)
• Fluency in modern JavaScript and the HTTP/REST ecosystem
• Comfortable with HTML, CSS, Git, and version control workflows
• Bonus: experience with Python or interest in working on AI-powered systems
• Great communication skills and a love for collaboration
• A builder mindset - scrappy, curious, and ready to ship
Perks & Culture:
• Flexible work setup: remote-first for most, hybrid if you're in Delhi NCR
• A high-growth, high-impact environment where your code goes live fast
• Opportunities to work with cutting-edge tech, including generative AI
• Small team, big vision: your work truly matters here.
Join Us
If you're excited about building meaningful tech in a fast-moving startup, let's talk.

Job Description For Associate Database Engineer (PostgreSQL)
Job Title: Associate Database Engineer (PostgreSQL)
Company: Mydbops
About us:
As a seasoned industry leader for 8 years in open-source database management, we specialize in providing unparalleled solutions and services for MySQL, MariaDB, MongoDB, PostgreSQL, TiDB, Cassandra, and more.
At Mydbops, we are committed to providing exceptional service and building lasting relationships with our customers.
Mydbops takes pride in being a PCI DSS-certified and ISO-certified company, reflecting our unwavering commitment to maintaining the highest security and operational excellence standards.
Position Overview:
An Associate Database Engineer is responsible for the administration and monitoring of database systems and is available to work in shifts.
Responsibilities
● Managing and maintaining various customer database environments.
● Proactively monitoring database performance using internal tools and metrics (see the sketch after this list).
● Implementing backup and recovery procedures.
● Ensuring data security and integrity.
● Troubleshooting database issues with a focus on internal diagnostics.
● Assisting with capacity planning and system upgrades.
● This role requires a solid understanding of database management systems, proficiency in using internal tools for performance monitoring, and flexibility to work in various shifts to ensure continuous database support.
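For illustration, here is a small monitoring sketch using psycopg2 and the standard pg_stat_activity view to surface long-running queries; the connection string and threshold are hypothetical.

```python
import psycopg2

DSN = "dbname=appdb user=monitor password=secret host=localhost"  # hypothetical connection string

LONG_RUNNING_SQL = """
    SELECT pid, now() - query_start AS duration, state, left(query, 80) AS query
    FROM pg_stat_activity
    WHERE state <> 'idle' AND now() - query_start > interval '5 minutes'
    ORDER BY duration DESC;
"""

def report_long_running_queries() -> None:
    """Print sessions whose current query has been running for more than five minutes."""
    with psycopg2.connect(DSN) as conn:
        with conn.cursor() as cur:
            cur.execute(LONG_RUNNING_SQL)
            for pid, duration, state, query in cur.fetchall():
                print(f"pid={pid} duration={duration} state={state} query={query}")

if __name__ == "__main__":
    report_long_running_queries()
```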
Requirements
● Good knowledge of Linux OS and its tools
● Strong expertise in PostgreSQL database administration
● Proficient in SQL and at least one scripting language (Python, Bash)
● Hands-on experience with database backups, recovery, upgrades, replication and clustering
● Troubleshooting of database issues
● Familiarity with Cloud (AWS/GCP)
● Working knowledge of AWS RDS, Aurora, CloudSQL
● Strong communication skills
● Ability to work effectively in a team environment
Preferred Qualifications:
● B.Tech/M.Tech or any equivalent degree
● Deeper understanding of databases and Linux troubleshooting
● Working knowledge of upgrades and availability solutions
● Working knowledge of backup tools like pg backrest/barman
● Good knowledge of query optimisation and index types
● Experience with database monitoring and management tools.
● Certifications on PostgreSQL or related technologies are a plus
● Prior experience in customer support or technical operations
Why Join Us:
● Opportunity to work in a dynamic and growing industry.
● Learning and development opportunities to enhance your career.
● A collaborative work environment with a supportive team.
Job Details:
● Job Type: Full-time
● Work Mode: Work From Home
● Experience: 1-3 years


Role: Data Scientist
Location: Bangalore (Remote)
Experience: 4 - 15 years
Skills Required - Radiology (visual images and text), classical models, multimodal LLMs, primarily generative AI, prompt engineering, large language models, speech & text domain AI, Python coding, AI skills, real-world evidence, healthcare domain
JOB DESCRIPTION
We are seeking an experienced Data Scientist with a proven track record in Machine Learning, Deep Learning, and a demonstrated focus on Large Language Models (LLMs) to join our cutting-edge Data Science team. You will play a pivotal role in developing and deploying innovative AI solutions that drive real-world impact to patients and healthcare providers.
Responsibilities
• LLM Development and Fine-tuning: fine-tune, customize, and adapt large language models (e.g., GPT, Llama2, Mistral, etc.) for specific business applications and NLP tasks such as text classification, named entity recognition, sentiment analysis, summarization, and question answering (see the sketch after this list). Experience with other transformer-based NLP models such as BERT will be an added advantage.
• Data Engineering: collaborate with data engineers to develop efficient data pipelines, ensuring the quality and integrity of large-scale text datasets used for LLM training and fine-tuning
• Experimentation and Evaluation: develop rigorous experimentation frameworks to evaluate model performance, identify areas for improvement, and inform model selection. Experience in LLM testing frameworks such as TruLens will be an added advantage.
• Production Deployment: work closely with MLOps and Data Engineering teams to integrate models into scalable production systems.
• Predictive Model Design and Implementation: leverage machine learning/deep learning and LLM methods to design, build, and deploy predictive models in oncology (e.g., survival models)
• Cross-functional Collaboration: partner with product managers, domain experts, and stakeholders to understand business needs and drive the successful implementation of data science solutions
• Knowledge Sharing: mentor junior team members and stay up to date with the latest advancements in machine learning and LLMs
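To make the summarization and NER tasks above concrete, here is a minimal sketch using the Hugging Face pipeline API; the model checkpoints are illustrative placeholders, and clinical use would require domain-specific, validated models.

```python
from transformers import pipeline

# Illustrative checkpoints; real projects would pin validated, domain-appropriate models.
summarizer = pipeline("summarization", model="t5-small")
ner = pipeline("ner", model="dslim/bert-base-NER", aggregation_strategy="simple")

note = (
    "Patient reports a persistent cough for three weeks. Chest X-ray ordered; "
    "follow-up scheduled with Dr. Rao at the Bangalore clinic next Monday."
)

print(summarizer(note, max_length=40, min_length=10)[0]["summary_text"])
for entity in ner(note):
    print(entity["entity_group"], entity["word"], round(float(entity["score"]), 2))
```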
Qualifications Required
• Doctoral or master’s degree in computer science, Data Science, Artificial Intelligence, or related field
• 5+ years of hands-on experience in designing, implementing, and deploying machine learning and deep learning models
• 12+ months of in-depth experience working with LLMs. Proficiency in Python and NLP-focused libraries (e.g., spaCy, NLTK, Transformers, TensorFlow/PyTorch).
• Experience working with cloud-based platforms (AWS, GCP, Azure)
Additional Skills
• Excellent problem-solving and analytical abilities
• Strong communication skills, both written and verbal
• Ability to thrive in a collaborative and fast-paced environment


Role Characteristics:
Analytics team provides analytical support to multiple stakeholders (Product, Engineering, Business Development, Ad Operations) by developing scalable analytical solutions, identifying problems, coming up with KPIs and monitoring them to measure the impact/success of product improvements/changes, and streamlining processes. This will be an exciting and challenging role that will enable you to work with large data sets, expose you to cutting-edge analytical techniques, let you work with the latest AWS analytics infrastructure (Redshift, S3, Athena), and give you experience in the usage of location data to drive businesses. Working in a dynamic start-up environment will give you significant opportunities for growth within the organization. A successful applicant will be passionate about technology and developing a deep understanding of human behavior in the real world. They would also have excellent communication skills, be able to synthesize and present complex information, and be a fast learner.
You Will:
- Perform root cause analysis with minimum guidance to figure out reasons for sudden changes/abnormalities in metrics
- Understand the objective/business context of various tasks and seek clarity by collaborating with different stakeholders (like Product and Engineering)
- Derive insights and put them together to build a story that solves a given problem
- Suggest ways for process improvements in terms of script optimization, automating repetitive tasks
- Create and automate reports and dashboards through Python to track key metrics based on given requirements (see the sketch after this list)
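A minimal sketch of the report-automation item above; the file names and column names are hypothetical.

```python
import pandas as pd

def build_daily_report(events_csv: str, out_csv: str) -> pd.DataFrame:
    """Aggregate raw events into a per-day metric table and write it out for a dashboard."""
    df = pd.read_csv(events_csv, parse_dates=["event_ts"])
    report = (
        df.assign(event_date=df["event_ts"].dt.date)
          .groupby("event_date")
          .agg(daily_active_users=("user_id", "nunique"), events=("event_id", "count"))
          .reset_index()
    )
    report.to_csv(out_csv, index=False)
    return report

if __name__ == "__main__":
    print(build_daily_report("events.csv", "daily_report.csv").tail())
```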
Technical Skills (Must have)
- B.Tech degree in Computer Science, Statistics, Mathematics, Economics or related fields
- 4-6 years of experience in working with data and conducting statistical and/or numerical analysis
- Ability to write SQL code
- Scripting/automation using Python
- Hands-on experience with a data visualisation tool like Looker/Tableau/QuickSight
- Basic to advanced understanding of statistics
Other Skills (Must have)
- Be willing and able to quickly learn about new businesses, database technologies and analysis techniques
- Strong oral and written communication
- Understanding of patterns/trends and ability to draw insights from them
Preferred Qualifications (Nice to have)
- Experience working with large datasets
- Experience with AWS analytics infrastructure (Redshift, S3, Athena, Boto3)
- Hands-on experience with AWS services like Lambda, Step Functions, Glue, and EMR, plus exposure to PySpark
What we offer
At GroundTruth, we want our employees to be comfortable with their benefits so they can focus on doing the work they love.
- Parental leave- Maternity and Paternity
- Flexible Time Offs (Earned Leaves, Sick Leaves, Birthday leave, Bereavement leave & Company Holidays)
- In Office Daily Catered Lunch
- Fully stocked snacks/beverages
- Health cover for any hospitalization. Covers both nuclear family and parents
- Tele-med for free doctor consultation, discounts on health checkups and medicines
- Wellness/Gym Reimbursement
- Pet Expense Reimbursement
- Childcare Expenses and reimbursements
- Employee assistance program
- Employee referral program
- Education reimbursement program
- Skill development program
- Cell phone reimbursement (Mobile Subsidy program)
- Internet reimbursement
- Birthday treat reimbursement
- Employee Provident Fund Scheme offering different tax saving options such as VPF and employee and employer contribution up to 12% Basic
- Creche reimbursement
- Co-working space reimbursement
- NPS employer match
- Meal card for tax benefit
- Special benefits on salary account
We are an equal opportunity employer and value diversity, inclusion and equity at our company. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.

Must-Have Skills & Qualifications:
- Bachelor's degree in Engineering (Computer Science, IT, or related field)
- 5–6 years of experience in manual testing of web and mobile applications
- Working knowledge of test automation tools: Selenium
- Experience with API testing using tools like Postman or equivalent
- Experience with BDD
- Strong understanding of test planning, test case design, and defect tracking processes
- Experience leading QA for projects and production releases
- Familiarity with Agile/Scrum methodologies
- Effective collaboration skills – able to work with cross-functional teams and contribute to automation efforts as needed
Good-to-Have Skills:
- Familiarity with CI/CD pipelines and version control tools (Git, Jenkins)
- Exposure to performance or security testing

Who we are
CoinCROWD is building the future of crypto spending with CROWD Wallet, a secure, user-friendly wallet designed for seamless cryptocurrency transactions in the real world. As the crypto landscape continues to evolve, CoinCROWD is at the forefront, enabling everyday consumers to use digital currencies like never before.
We’re not just another blockchain startup—we’re a team of innovators, dreamers, and tech geeks who believe in making crypto fun, easy, and accessible to everyone. If you love solving complex problems and cracking blockchain puzzles while sharing memes with your team, you’ll fit right in!
What you’ll be doing
As a key member of our engineering team, you will be responsible for designing, developing, and maintaining robust, scalable, and high-performance backend systems that support CoinCrowd’s innovative products.
You will be responsible for...
• Designing and implementing blockchain-based applications using QuickNode, Web3Auth, and Python or Node.js (see the sketch after this list).
• Developing and maintaining smart contracts and decentralized applications (dApps) with a focus on security and scalability.
• Integrating blockchain solutions with CROWD Wallet and other financial systems.
• Collaborating with frontend developers, product managers, and other stakeholders to deliver seamless crypto transactions.
• Ensuring high availability and performance of blockchain infrastructure.
• Managing on-chain and off-chain data synchronization.
• Researching and implementing emerging blockchain technologies to improve CoinCROWD’s ecosystem.
• Troubleshooting and resolving blockchain-related issues in a timely manner.
• Ensuring compliance with blockchain security best practices and industry regulations.
• Contributing to architectural decisions and best practices for blockchain development.
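A minimal sketch of talking to an EVM chain through a hosted node provider such as QuickNode, assuming web3.py v6; the RPC endpoint and wallet address are hypothetical placeholders.

```python
from web3 import Web3

RPC_URL = "https://example-node-provider.com/your-endpoint"  # hypothetical QuickNode-style URL
WALLET = "0x0000000000000000000000000000000000000000"        # placeholder address

def wallet_snapshot() -> dict:
    """Return basic on-chain facts for a wallet: connectivity, latest block, and ETH balance."""
    w3 = Web3(Web3.HTTPProvider(RPC_URL))
    balance_wei = w3.eth.get_balance(Web3.to_checksum_address(WALLET))
    return {
        "connected": w3.is_connected(),
        "latest_block": w3.eth.block_number,
        "balance_eth": float(w3.from_wei(balance_wei, "ether")),
    }

if __name__ == "__main__":
    print(wallet_snapshot())
```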
What we need from you
We're looking for dynamic, self-motivated individuals who are excited about shaping the future of crypto spending;
• This will be a Full-time role.
• May require occasional travel for conferences or team meetups (Yes, we love those!).
• Bonus points if you can beat our CTO in a game of chess or share an awesome crypto meme!
• Location - WFH initially
What skills & experience you’ll bring to us
• 3 to 10 years of experience in software development, with at least 3+ years in blockchain development.
• Strong knowledge of QuickNode and Web3Auth for blockchain infrastructure and authentication.
• Proficiency in Python or Node.js for backend development.
• Experience in developing, deploying, and auditing smart contracts (Solidity, Rust, or equivalent).
• Hands-on experience with Ethereum, Polygon, or other EVM-compatible blockchains.
• Familiarity with DeFi protocols, NFT standards (ERC-721, ERC-1155), and Web3.js/ Ethers.js.
• Understanding of blockchain security best practices and cryptographic principles.
• Experience working with RESTful APIs, GraphQL, and microservices architecture.
• Strong problem-solving skills and ability to work in a fast-paced startup environment.
• Excellent communication skills and ability to collaborate effectively with cross-functional teams.


Role Description
This is a full-time, remote role for a Frappe and ERPNext Developer. The Developer will be responsible for designing, developing, and maintaining Frappe and ERPNext applications. Daily tasks include customizing modules, integrating third-party systems, troubleshooting and resolving software issues, and working closely with cross-functional teams to enhance system efficiency and user experience.
Qualifications
- Proficiency in Frappe and ERPNext development
- Experience with Python, JavaScript, and web technologies
- Understanding of ERP workflows and business processes
- Skills in database management (MySQL, PostgreSQL)
- Strong problem-solving and troubleshooting abilities
- Ability to work independently and remotely
- Excellent communication and teamwork skills
- Bachelor's degree in Computer Science, Information Technology, or related field
- Experience in the healthcare industry is a plus
- Experience in customising the Frappe CRM



Title - Sr Software Engineer
Company Summary :
As the recognized global standard for project-based businesses, Deltek delivers software and information solutions to help organizations achieve their purpose. Our market leadership stems from the work of our diverse employees who are united by a passion for learning, growing and making a difference. At Deltek, we take immense pride in creating a balanced, values-driven environment, where every employee feels included and empowered to do their best work. Our employees put our core values into action daily, creating a one-of-a-kind culture that has been recognized globally. Thanks to our incredible team, Deltek has been named one of America's Best Midsize Employers by Forbes, a Best Place to Work by Glassdoor, a Top Workplace by The Washington Post and a Best Place to Work in Asia by World HRD Congress. www.deltek.com
Business Summary :
The Deltek Engineering and Technology team builds best-in-class solutions to delight customers and meet their business needs. We are laser-focused on software design, development, innovation and quality. Our team of experts has the talent, skills and values to deliver products and services that are easy to use, reliable, sustainable and competitive. If you're looking for a safe environment where ideas are welcome, growth is supported and questions are encouraged – consider joining us as we explore the limitless opportunities of the software industry.
External Job Title :
Sr Software Engineer
Position Responsibilities :
- Develop and manage integrations with third-party services and APIs using industry-standard protocols like OAuth2 for secure authentication and authorization.
- Develop scalable, performant APIs for Deltek products
- Accountability for the successful implementation of the requirements by the team.
- Troubleshoot, debug, and optimize code and workflows for better performance and scalability.
- Undertake analysis, design, coding and testing activities of complex modules
- Support the company’s development processes and development guidelines including code reviews, coding style and unit testing requirements.
- Participate in code reviews and provide mentorship to junior developers.
- Stay up-to-date with emerging technologies and best practices in Python development, AWS, and frontend frameworks like React.
- Adopt industry best practices in all your projects - TDD, CI/CD, Infrastructure as Code, linting
- Pragmatic enough to deliver an MVP, but aspirational enough to think about how it will work with millions of users and adapt to new challenges
- Readiness to hit the ground running – you may not know how to solve everything right off the bat, but you will put in the time and effort to understand so that you can design the architecture of complex features with multiple components.
Qualifications :
- A college degree in Computer Science, Software Engineering, Information Science or a related field is required
- Minimum 4-6 years of experience and sound programming skills in Python, the .NET platform (VB & C#), TypeScript/JavaScript, frontend technologies like React.js/Ember.js, and SQL databases (e.g., PostgreSQL)
- Experience in backend development and Apache Airflow (or equivalent framework).
- Ability to build APIs and optimize SQL queries with performance considerations.
- Experience with Agile Development
- Experience in writing and maintaining unit tests and using testing frameworks is desirable
- Exposure to Amazon Web Services (AWS) technologies, Terraform, Docker is a plus
- Strong desire to continually improve knowledge and skills through personal development activities and to apply them to continuous software improvement.
- The ability to work under tight deadlines, tolerate ambiguity and work effectively in an environment with multiple competing priorities.
- Strong problem-solving and debugging skills.
- Ability to work in an Agile environment and collaborate with cross-functional teams.
- Familiarity with version control systems like Git.
- Excellent communication skills and the ability to work effectively in a remote or hybrid team setting.

Key Responsibilities
- Experience working with Python, LLMs, deep learning, NLP, etc.
- Utilize GitHub for version control, including pushing and pulling code updates.
- Work with Hugging Face and OpenAI platforms for deploying models and exploring open-source AI models.
- Engage in prompt engineering and the fine-tuning process of AI models.
Requirements
- Proficiency in Python programming.
- Experience with GitHub and version control workflows.
- Familiarity with AI platforms such as Hugging Face and OpenAI.
- Understanding of prompt engineering and model fine-tuning.
- Excellent problem-solving abilities and a keen interest in AI technology.


Who We Are
Studio Management (studiomgmt.co) is a uniquely positioned organization combining venture capital, hedge fund investments, and startup incubation. Our portfolio includes successful ventures like Sentieo (acquired by AlphaSense for $185 million), as well as innovative products such as Emailzap (emailzap.co) and Mindful Minutes for Toddlers. We’re expanding our team to continue launching products at the forefront of technology, and we’re looking for an Engineering Lead who shares our passion for building the “next big thing.”
The Role
We are seeking a hands-on Engineering Lead to guide our product development efforts across multiple high-impact ventures. You will own the overall technical vision, mentor a remote team of engineers, and spearhead the creation of new-age products in a fast-paced startup environment. This is a strategic, influential role that requires a blend of technical prowess, leadership, and a keen interest in building products from zero to one.
Responsibilities
- Technical Leadership: Define and drive the architectural roadmap for new and existing products, ensuring high-quality code, scalability, and reliability.
- Mentorship & Team Building: Hire, lead, and develop a team of engineers. Foster a culture of continuous learning, ownership, and collaboration.
- Product Innovation: Work closely with product managers, designers, and stakeholders to conceptualize, build, and iterate on cutting-edge, user-centric solutions.
- Hands-On Development: Write efficient, maintainable code and perform thorough code reviews, setting the standard for engineering excellence.
- Cross-Functional Collaboration: Partner with different functions (product, design, marketing) to ensure alignment on requirements, timelines, and deliverables.
- Process Optimization: Establish best practices and processes that improve development speed, code quality, and overall team productivity.
- Continuous Improvement: Champion performance optimizations, new technologies, and modern frameworks to keep the tech stack fresh and competitive.
What We’re Looking For
- 4+ Years of Engineering Experience: A proven track record of designing and delivering high-impact software products.
- Technical Mastery: Expertise in a full-stack environment—HTML, CSS, JavaScript (React/React Native), Python (Django), and AWS. Strong computer science fundamentals, including data structures and system design.
- Leadership & Communication: Demonstrated ability to mentor team members, influence technical decisions, and articulate complex concepts clearly.
- Entrepreneurial Mindset: Passion for building new-age products, thriving in ambiguity, and rapidly iterating to find product-market fit.
- Problem Solver: Adept at breaking down complex challenges into scalable, efficient solutions.
- Ownership Mentality: Self-driven individual who takes full responsibility for project outcomes and team success.
- Adaptability: Comfort working in an environment where priorities can shift quickly, and opportunities for innovation abound.
Why Join Us
- High-Impact Work: Drive the technical direction of multiple ventures, shaping the future of new products from day one.
- Innovation Culture: Operate in a remote-first, collaborative environment that encourages bold thinking and rapid experimentation.
- Growth & Autonomy: Enjoy opportunities for both leadership advancement and deepening your technical skillset.
- Global Team: Work alongside a diverse group of talented professionals who share a passion for pushing boundaries.
- Competitive Benefits: Receive market-leading compensation and benefits in a role that rewards both individual and team success.



Position Responsibilities :
- Develop and manage integrations with third-party services and APIs using industry-standard protocols like OAuth2 for secure authentication and authorization.
- Develop scalable, performant APIs for Deltek products
- Accountability for the successful implementation of the requirements by the team.
- Troubleshoot, debug, and optimize code and workflows for better performance and scalability.
- Undertake analysis, design, coding and testing activities of complex modules
- Support the company’s development processes and development guidelines including code reviews, coding style and unit testing requirements.
- Participate in code reviews and provide mentorship to junior developers.
- Stay up-to-date with emerging technologies and best practices in Python development, AWS, and frontend frameworks like React, and suggest optimisations based on them
- Adopt industry best practices in all your projects - TDD, CI/CD, Infrastructure as Code, linting
- Pragmatic enough to deliver an MVP, but aspirational enough to think about how it will work with millions of users and adapt to new challenges
- Readiness to hit the ground running – you may not know how to solve everything right off the bat, but you will put in the time and effort to understand so that you can design the architecture of complex features with multiple components.
Qualifications :
- A college degree in Computer Science, Software Engineering, Information Science or a related field is required
- Minimum 8-10 years of experience and sound programming skills in Python, the .NET platform (VB & C#), TypeScript/JavaScript, frontend technologies like React.js/Ember.js, and SQL databases (e.g., PostgreSQL)
- Experience in backend development and Apache Airflow (or equivalent framework).
- Ability to build APIs and optimize SQL queries with performance considerations.
- Experience with Agile Development
- Experience in writing and maintaining unit tests and using testing frameworks is desirable
- Exposure to Amazon Web Services (AWS) technologies, Terraform, Docker is a plus
- Strong desire to continually improve knowledge and skills through personal development activities and to apply them to continuous software improvement.
- The ability to work under tight deadlines, tolerate ambiguity and work effectively in an environment with multiple competing priorities.
- Strong problem-solving and debugging skills.
- Ability to work in an Agile environment and collaborate with cross-functional teams.
- Familiarity with version control systems like Git.
- Excellent communication skills and the ability to work effectively in a remote or hybrid team setting.

About Asha Health
Asha Health helps medical practices launch their own AI clinics. We're backed by Y Combinator and recently raised an oversubscribed seed round from some of the best investors in Silicon Valley. Our team includes AI product leaders from companies like Google, physician executives from major health systems, and more.
About the Role
We're looking for an exceptional engineer to join our engineering team (currently 6 teammates). We're looking for someone who is an all-rounder, but has particularly exceptional backend engineering skills.
In this role, you will have the opportunity to build state-of-the-art AI agents, and learn what it takes to build an industry-leading multimodal, multi-agent suite.
Responsibilities
You'll wear many hats. Your responsibilities will fall into 3 categories:
Full-Stack Engineering (80% backend, 20% frontend)
- Lead the team in designing scalable architecture to support performant web applications.
- Develop features end-to-end for our web applications (TypeScript, Node.js, Python, etc.).
AI Engineering
- Develop AI agents with a high bar for reliability and performance.
- Build SOTA LLM-powered tools for providers, practices, and patients.
- Architect our data annotation, fine tuning, and RLHF workflows.
Product Management
- Propose, scope, and prioritize new feature ideas. Yes, engineers on our team get to be leaders and propose new features themselves!
- Lead the team in building best-in-class user experiences.
Requirements
You do not need AI experience to apply for this role. While we prefer candidates with some AI experience, we have hired engineers without any who demonstrated that they are very fast learners.
We prefer candidates who have worked as a founding engineer at an early stage startup (Seed or Preseed) or a Senior Software Engineer at a Series A or B startup.
Perks of Working at Asha Health
#1 Build cool stuff: work on the latest, cutting-edge tech (build frontier AI agents with technologies that evolve every 2 weeks).
#2 Surround yourself with top talent: our team includes senior AI product leaders from companies like Google, experienced physician executives, and top 1% engineering talent (the best of the best).
#3 Rocketship trajectory: we get more customer interest than we have time to onboard – it's a good problem to have :)


We're at the forefront of creating advanced AI systems, from fully autonomous agents that provide intelligent customer interaction to data analysis tools that offer insightful business solutions. We are seeking enthusiastic interns who are passionate about AI and ready to tackle real-world problems using the latest technologies.
Duration: 6 months
Perks:
- Hands-on experience with real AI projects.
- Mentoring from industry experts.
- A collaborative, innovative, and flexible work environment.
After completing the internship, there is a chance of a full-time role as an AI/ML Engineer (up to 12 LPA).
Compensation:
- Joining Bonus: A one-time bonus of INR 2,500 will be awarded upon joining.
- Stipend: the base is INR 8,000/- and can increase up to INR 20,000/- depending on performance metrics.
Key Responsibilities
- Work with Python, LLMs, deep learning, NLP, and related technologies.
- Utilize GitHub for version control, including pushing and pulling code updates.
- Work with the Hugging Face and OpenAI platforms to deploy models and explore open-source AI models (see the sketch after this list).
- Engage in prompt engineering and fine-tuning of AI models.
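A minimal sketch of the Hugging Face workflow mentioned above; the task and model are examples only, not something the internship prescribes.

```python
# Minimal sketch: load a small public sentiment model and run one prediction.
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",  # example model only
)
print(classifier("The onboarding flow was smooth and fast."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```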
Requirements
- Proficiency in Python programming.
- Experience with GitHub and version control workflows.
- Familiarity with AI platforms such as Hugging Face and OpenAI.
- Understanding of prompt engineering and model fine-tuning.
- Excellent problem-solving abilities and a keen interest in AI technology.
To apply, click the link below and submit the assignment.

Job Title: IT and Cybersecurity Network Backend Engineer (Remote)
Job Summary:
Join Springer Capital’s elite tech team to architect and fortify our digital infrastructure, ensuring robust, secure, and scalable backend systems that power cutting‑edge investment solutions.
Job Description:
Founded in 2015, Springer Capital is a technology-forward asset management and investment firm that redefines financial strategies through innovative digital solutions. We identify high-potential opportunities and leverage advanced technology to drive value, transforming traditional investment paradigms. Our culture is built on agility, creative problem-solving, and a relentless pursuit of excellence.
Job Highlights:
As an IT and Cybersecurity Network Backend Engineer, you will play a central role in designing, developing, and securing our backend systems. You’ll be responsible for creating bulletproof server architectures and integrating sophisticated cybersecurity measures to ensure our digital assets remain secure, reliable, and scalable—all while working fully remotely.
Responsibilities:
- Backend Architecture & Security:
  - Design, develop, and maintain high-performance backend systems and RESTful APIs using technologies such as Python, Node.js, or Java (a brief sketch follows this list).
  - Implement advanced cybersecurity protocols, including encryption, multi-factor authentication, and anomaly detection, to safeguard our infrastructure.
- Network Infrastructure Management:
  - Architect secure cloud and hybrid network solutions to protect sensitive data and ensure uninterrupted service.
  - Develop robust logging, monitoring, and compliance mechanisms.
- Collaborative Innovation:
  - Partner with cross-functional teams (DevOps, frontend, and product managers) to integrate security seamlessly into every layer of our technology stack.
  - Participate in regular security audits, agile sprints, and technical reviews.
- Continuous Improvement:
  - Keep abreast of emerging technologies and cybersecurity threats, proposing and implementing innovative solutions to maintain system integrity.
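As a hedged illustration of the API-and-security responsibilities above, here is a minimal FastAPI sketch that gates a route behind a header token compared in constant time. The route, header name, and token handling are hypothetical, not Springer Capital's actual design.

```python
# Hypothetical sketch: the route, header name, and token handling are illustrative.
import hmac
import os

from fastapi import Depends, FastAPI, Header, HTTPException

app = FastAPI()
EXPECTED_TOKEN = os.environ.get("API_TOKEN", "")  # supplied via a secret manager in practice


def require_token(x_api_token: str = Header(...)) -> None:
    # Constant-time comparison avoids leaking information through timing differences.
    if not EXPECTED_TOKEN or not hmac.compare_digest(x_api_token.encode(), EXPECTED_TOKEN.encode()):
        raise HTTPException(status_code=401, detail="Invalid API token")


@app.get("/portfolios/{portfolio_id}", dependencies=[Depends(require_token)])
def get_portfolio(portfolio_id: int) -> dict:
    # Placeholder payload; a real handler would call the data layer.
    return {"portfolio_id": portfolio_id, "status": "ok"}
```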
What We Offer:
- Advanced Learning & Mentorship: Work side-by-side with industry experts who will empower you to push the boundaries of cybersecurity and backend engineering.
- Impactful Work: Engage in projects that directly influence the security and scalability of our revolutionary digital investment strategies.
- Dynamic, Remote Culture: Thrive in a flexible, remote-first environment that champions creativity, collaboration, and work-life balance.
- Career Growth: Unlock long-term career advancement opportunities in a forward-thinking organization that values innovation and initiative.
Requirements:
- Degree (or current enrollment) in Computer Science, Cybersecurity, or a related field.
- Proficiency in at least one backend programming language (Python, Node.js, or Java) and hands-on experience with RESTful API design.
- Solid understanding of network security principles and experience implementing cybersecurity best practices.
- Passionate about designing secure systems, solving complex technical challenges, and staying ahead of industry trends.
- Strong analytical and communication skills, with the ability to work effectively in a collaborative, fast-paced environment.
About Springer Capital:
At Springer Capital, we blend financial expertise with digital innovation to shape tomorrow’s investment landscape. Our relentless drive to merge technology and asset management has positioned us as leaders in transforming traditional finance into dynamic, tech-enabled ventures.
Location: Global (Remote)
Job Type: Full-time
Pay: $50 USD per month
Work Location: Remote
Embark on your next challenge with Springer Capital—where your technical prowess and dedication to security help safeguard the future of digital investments.

Job Title : Technical Architect
Experience : 8 to 12+ Years
Location : Trivandrum / Kochi / Remote
Work Mode : Remote flexibility available
Notice Period : Immediate to max 15 days (30 days with negotiation possible)
Summary :
We are looking for a highly skilled Technical Architect with expertise in Java Full Stack development, cloud architecture, and modern frontend frameworks (Angular). This is a client-facing, hands-on leadership role, ideal for technologists who enjoy designing scalable, high-performance, cloud-native enterprise solutions.
🛠 Key Responsibilities :
- Architect scalable and high-performance enterprise applications.
- Hands-on involvement in system design, development, and deployment.
- Guide and mentor development teams in architecture and best practices.
- Collaborate with stakeholders and clients to gather and refine requirements.
- Evaluate tools, processes, and drive strategic technical decisions.
- Design microservices-based solutions deployed over cloud platforms (AWS/Azure/GCP).
✅ Mandatory Skills :
- Backend : Java, Spring Boot, Python
- Frontend : Angular (at least 2 years of recent hands-on experience)
- Cloud : AWS / Azure / GCP
- Architecture : Microservices, EAI, MVC, Enterprise Design Patterns
- Data : SQL / NoSQL, Data Modeling
- Other : Client handling, team mentoring, strong communication skills
➕ Nice to Have Skills :
- Mobile technologies (Native / Hybrid / Cross-platform)
- DevOps & Docker-based deployment
- Application Security (OWASP, PCI DSS)
- TOGAF familiarity
- Test-Driven Development (TDD)
- Analytics / BI / ML / AI exposure
- Domain knowledge in Financial Services or Payments
- 3rd-party integration tools (e.g., MuleSoft, BizTalk)
⚠️ Important Notes :
- Only candidates from outside Hyderabad/Telangana and non-JNTU graduates will be considered.
- Candidates must be serving notice or joinable within 30 days.
- Client-facing experience is mandatory.
- Java Full Stack candidates are highly preferred.
🧭 Interview Process :
- Technical Assessment
- Two Rounds – Technical Interviews
- Final Round